00:00:00.000 Started by upstream project "autotest-per-patch" build number 126222
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.017 The recommended git tool is: git
00:00:00.018 using credential 00000000-0000-0000-0000-000000000002
00:00:00.020 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.037 Fetching changes from the remote Git repository
00:00:00.042 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.070 Using shallow fetch with depth 1
00:00:00.070 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.070 > git --version # timeout=10
00:00:00.096 > git --version # 'git version 2.39.2'
00:00:00.096 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.130 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.130 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.299 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.311 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.322 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:02.322 > git config core.sparsecheckout # timeout=10
00:00:02.334 > git read-tree -mu HEAD # timeout=10
00:00:02.350 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:02.369 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:02.370 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:02.450 [Pipeline] Start of Pipeline
00:00:02.466 [Pipeline] library
00:00:02.467 Loading library shm_lib@master
00:00:02.468 Library shm_lib@master is cached. Copying from home.
00:00:02.483 [Pipeline] node
00:00:02.497 Running on WFP29 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:02.498 [Pipeline] {
00:00:02.509 [Pipeline] catchError
00:00:02.511 [Pipeline] {
00:00:02.526 [Pipeline] wrap
00:00:02.537 [Pipeline] {
00:00:02.547 [Pipeline] stage
00:00:02.549 [Pipeline] { (Prologue)
00:00:02.748 [Pipeline] sh
00:00:03.032 + logger -p user.info -t JENKINS-CI
00:00:03.048 [Pipeline] echo
00:00:03.049 Node: WFP29
00:00:03.056 [Pipeline] sh
00:00:03.351 [Pipeline] setCustomBuildProperty
00:00:03.363 [Pipeline] echo
00:00:03.364 Cleanup processes
00:00:03.369 [Pipeline] sh
00:00:03.651 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.651 549758 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.665 [Pipeline] sh
00:00:03.949 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.949 ++ grep -v 'sudo pgrep'
00:00:03.949 ++ awk '{print $1}'
00:00:03.949 + sudo kill -9
00:00:03.949 + true
00:00:03.963 [Pipeline] cleanWs
00:00:03.971 [WS-CLEANUP] Deleting project workspace...
00:00:03.971 [WS-CLEANUP] Deferred wipeout is used...
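
The cleanup step above chains pgrep, grep, and awk to reap any SPDK processes left over from a previous run before the workspace is wiped. A minimal standalone sketch of that pipeline (workspace path as in this job; the trailing `|| true` mirrors the `+ true` above and keeps the step non-fatal when nothing matches):

#!/usr/bin/env bash
# Sketch of the "Cleanup processes" step above, assuming the same workspace.
WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest
# pgrep -af prints "PID full-command" for every match; drop the pgrep
# invocation itself, keep only the PID column, then force-kill the rest.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
sudo kill -9 $pids || true   # an empty $pids makes kill fail; stay non-fatal
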
00:00:03.977 [WS-CLEANUP] done
00:00:03.981 [Pipeline] setCustomBuildProperty
00:00:03.993 [Pipeline] sh
00:00:04.269 + sudo git config --global --replace-all safe.directory '*'
00:00:04.353 [Pipeline] httpRequest
00:00:04.380 [Pipeline] echo
00:00:04.382 Sorcerer 10.211.164.101 is alive
00:00:04.389 [Pipeline] httpRequest
00:00:04.394 HttpMethod: GET
00:00:04.394 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:04.395 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:04.398 Response Code: HTTP/1.1 200 OK
00:00:04.398 Success: Status code 200 is in the accepted range: 200,404
00:00:04.399 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:04.989 [Pipeline] sh
00:00:05.268 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.283 [Pipeline] httpRequest
00:00:05.309 [Pipeline] echo
00:00:05.310 Sorcerer 10.211.164.101 is alive
00:00:05.318 [Pipeline] httpRequest
00:00:05.323 HttpMethod: GET
00:00:05.323 URL: http://10.211.164.101/packages/spdk_a22f117fe5f0b0fdd392a07d6811ed9bd7a0a55f.tar.gz
00:00:05.324 Sending request to url: http://10.211.164.101/packages/spdk_a22f117fe5f0b0fdd392a07d6811ed9bd7a0a55f.tar.gz
00:00:05.338 Response Code: HTTP/1.1 200 OK
00:00:05.338 Success: Status code 200 is in the accepted range: 200,404
00:00:05.338 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_a22f117fe5f0b0fdd392a07d6811ed9bd7a0a55f.tar.gz
00:00:31.464 [Pipeline] sh
00:00:31.747 + tar --no-same-owner -xf spdk_a22f117fe5f0b0fdd392a07d6811ed9bd7a0a55f.tar.gz
00:00:34.294 [Pipeline] sh
00:00:34.614 + git -C spdk log --oneline -n5
00:00:34.614 a22f117fe nvme/perf: Use sqthread_poll_cpu for io_uring workloads
00:00:34.614 719d03c6a sock/uring: only register net impl if supported
00:00:34.614 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:34.614 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:00:34.614 6c7c1f57e accel: add sequence outstanding stat
00:00:34.628 [Pipeline] }
00:00:34.646 [Pipeline] // stage
00:00:34.656 [Pipeline] stage
00:00:34.659 [Pipeline] { (Prepare)
00:00:34.681 [Pipeline] writeFile
00:00:34.700 [Pipeline] sh
00:00:34.985 + logger -p user.info -t JENKINS-CI
00:00:35.000 [Pipeline] sh
00:00:35.285 + logger -p user.info -t JENKINS-CI
00:00:35.299 [Pipeline] sh
00:00:35.583 + cat autorun-spdk.conf
00:00:35.583 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:35.583 SPDK_TEST_FUZZER_SHORT=1
00:00:35.583 SPDK_TEST_FUZZER=1
00:00:35.583 SPDK_RUN_UBSAN=1
00:00:35.591 RUN_NIGHTLY=0
00:00:35.597 [Pipeline] readFile
00:00:35.627 [Pipeline] withEnv
00:00:35.630 [Pipeline] {
00:00:35.645 [Pipeline] sh
00:00:35.932 + set -ex
00:00:35.932 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:00:35.932 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:35.932 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:35.932 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:35.932 ++ SPDK_TEST_FUZZER=1
00:00:35.932 ++ SPDK_RUN_UBSAN=1
00:00:35.932 ++ RUN_NIGHTLY=0
00:00:35.932 + case $SPDK_TEST_NVMF_NICS in
00:00:35.932 + DRIVERS=
00:00:35.932 + [[ -n '' ]]
00:00:35.932 + exit 0
00:00:35.942 [Pipeline] }
00:00:35.962 [Pipeline] // withEnv
00:00:35.968 [Pipeline] }
00:00:35.987 [Pipeline] // stage
00:00:35.998 [Pipeline] catchError
00:00:36.000 [Pipeline] {
00:00:36.016 [Pipeline] timeout
00:00:36.016 Timeout set to expire in 30 min
00:00:36.018 [Pipeline] {
00:00:36.035 [Pipeline] stage
00:00:36.037 [Pipeline] { (Tests)
00:00:36.053 [Pipeline] sh
00:00:36.339 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:36.339 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:36.339 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:36.339 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:36.339 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:36.339 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:36.339 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:36.339 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:36.339 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:36.339 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:36.339 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:00:36.339 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:36.339 + source /etc/os-release
00:00:36.339 ++ NAME='Fedora Linux'
00:00:36.339 ++ VERSION='38 (Cloud Edition)'
00:00:36.339 ++ ID=fedora
00:00:36.339 ++ VERSION_ID=38
00:00:36.339 ++ VERSION_CODENAME=
00:00:36.339 ++ PLATFORM_ID=platform:f38
00:00:36.339 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:36.339 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:36.339 ++ LOGO=fedora-logo-icon
00:00:36.339 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:36.339 ++ HOME_URL=https://fedoraproject.org/
00:00:36.339 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:36.339 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:36.339 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:36.339 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:36.339 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:36.339 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:36.339 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:36.339 ++ SUPPORT_END=2024-05-14
00:00:36.339 ++ VARIANT='Cloud Edition'
00:00:36.339 ++ VARIANT_ID=cloud
00:00:36.339 + uname -a
00:00:36.339 Linux spdk-wfp-29 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
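
The trace above sources /etc/os-release and inspects its fields; the `[[ Fedora Linux == FreeBSD ]]` check a few lines below is built on the same variables. A minimal sketch of that detection pattern, assuming only the standard os-release fields:

#!/usr/bin/env bash
# Sketch of the distro detection used above; NAME/PRETTY_NAME come from the
# standard /etc/os-release file that the harness sources.
source /etc/os-release
echo "Running on ${PRETTY_NAME:-unknown}, kernel $(uname -r)"
if [[ $NAME == FreeBSD ]]; then
    echo "FreeBSD-specific setup would go here"
fi
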
00:00:36.339 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:39.634 Hugepages
00:00:39.634 node hugesize free / total
00:00:39.634 node0 1048576kB 0 / 0
00:00:39.634 node0 2048kB 0 / 0
00:00:39.634 node1 1048576kB 0 / 0
00:00:39.634 node1 2048kB 0 / 0
00:00:39.634
00:00:39.634 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:39.634 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:39.634 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:39.634 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:39.634 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:39.634 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:39.634 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:39.634 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:39.634 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:39.634 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:00:39.634 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:39.634 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:39.634 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:39.634 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:39.634 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:39.634 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:39.634 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:39.634 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:39.634 NVMe 0000:af:00.0 8086 2701 1 nvme nvme1 nvme1n1
00:00:39.894 NVMe 0000:b0:00.0 8086 2701 1 nvme nvme2 nvme2n1
00:00:39.894 + rm -f /tmp/spdk-ld-path
00:00:39.894 + source autorun-spdk.conf
00:00:39.894 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:39.894 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:39.894 ++ SPDK_TEST_FUZZER=1
00:00:39.894 ++ SPDK_RUN_UBSAN=1
00:00:39.894 ++ RUN_NIGHTLY=0
00:00:39.894 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:39.894 + [[ -n '' ]]
00:00:39.894 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:39.894 + for M in /var/spdk/build-*-manifest.txt
00:00:39.894 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:39.894 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:39.894 + for M in /var/spdk/build-*-manifest.txt
00:00:39.894 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:39.894 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:39.894 ++ uname
00:00:39.894 + [[ Linux == \L\i\n\u\x ]]
00:00:39.894 + sudo dmesg -T
00:00:39.894 + sudo dmesg --clear
00:00:39.894 + dmesg_pid=550728
00:00:39.894 + [[ Fedora Linux == FreeBSD ]]
00:00:39.894 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:39.894 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:39.894 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:39.894 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:39.894 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:39.894 + [[ -x /usr/src/fio-static/fio ]]
00:00:39.894 + export FIO_BIN=/usr/src/fio-static/fio
00:00:39.894 + FIO_BIN=/usr/src/fio-static/fio
00:00:39.894 + sudo dmesg -Tw
00:00:39.894 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:39.894 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:39.894 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:39.894 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:39.894 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:39.894 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:39.894 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:39.894 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:39.894 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:39.894 Test configuration:
00:00:39.894 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:39.894 SPDK_TEST_FUZZER_SHORT=1
00:00:39.894 SPDK_TEST_FUZZER=1
00:00:39.894 SPDK_RUN_UBSAN=1
00:00:40.155 RUN_NIGHTLY=0
18:55:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
18:55:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
18:55:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
18:55:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
18:55:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:55:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:55:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:55:20 -- paths/export.sh@5 -- $ export PATH
18:55:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:55:20 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
18:55:20 -- common/autobuild_common.sh@444 -- $ date +%s
18:55:20 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721062520.XXXXXX
18:55:20 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721062520.ul7maK
18:55:20 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
18:55:20 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
18:55:20 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
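
The scanbuild_exclude assignment just above (continued below) shows how autobuild pieces its static-analysis wrapper together out of exclude flags. A minimal sketch of the same assembly (paths as in this job; --status-bugs makes scan-build's exit code reflect whether any bugs were reported):

#!/usr/bin/env bash
# Sketch of the scan-build command assembly traced in this log.
out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
scanbuild_exclude="--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/"
scanbuild_exclude+=" --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp"
scanbuild="scan-build -o $out/scan-build-tmp $scanbuild_exclude --status-bugs"
# A later stage would then run, for example: $scanbuild make -j72
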
18:55:20 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
18:55:20 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
18:55:20 -- common/autobuild_common.sh@460 -- $ get_config_params
18:55:20 -- common/autotest_common.sh@396 -- $ xtrace_disable
18:55:20 -- common/autotest_common.sh@10 -- $ set +x
18:55:20 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
18:55:20 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
18:55:20 -- pm/common@17 -- $ local monitor
18:55:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:55:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:55:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:55:20 -- pm/common@21 -- $ date +%s
18:55:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:55:20 -- pm/common@21 -- $ date +%s
18:55:20 -- pm/common@25 -- $ sleep 1
18:55:20 -- pm/common@21 -- $ date +%s
18:55:20 -- pm/common@21 -- $ date +%s
18:55:20 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721062520
18:55:20 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721062520
18:55:20 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721062520
18:55:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721062520
Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721062520_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721062520_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721062520_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721062520_collect-bmc-pm.bmc.pm.log
00:00:41.094 18:55:21 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:41.094 18:55:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:41.094 18:55:21 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:41.094 18:55:21 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:41.094 18:55:21 -- spdk/autobuild.sh@16 -- $ date -u
00:00:41.094 Mon Jul 15 04:55:21 PM UTC 2024
00:00:41.094 18:55:21 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:41.094 v24.09-pre-203-ga22f117fe
00:00:41.094 18:55:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:41.094 18:55:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:41.094 18:55:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:41.094 18:55:21 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:41.094 18:55:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:41.094 18:55:21 -- common/autotest_common.sh@10 -- $ set +x
00:00:41.353 ************************************
00:00:41.353 START TEST ubsan
00:00:41.353 ************************************
00:00:41.353 18:55:21 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:41.353 using ubsan
00:00:41.353
00:00:41.353 real 0m0.001s
00:00:41.353 user 0m0.000s
00:00:41.353 sys 0m0.000s
00:00:41.353 18:55:21 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:00:41.353 18:55:21 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:41.353 ************************************
00:00:41.353 END TEST ubsan
00:00:41.353 ************************************
00:00:41.353 18:55:21 -- common/autotest_common.sh@1142 -- $ return 0
00:00:41.353 18:55:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:41.353 18:55:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:41.353 18:55:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:41.353 18:55:21 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
00:00:41.353 18:55:21 -- spdk/autobuild.sh@52 -- $ llvm_precompile
00:00:41.353 18:55:21 -- common/autobuild_common.sh@432 -- $ run_test autobuild_llvm_precompile _llvm_precompile
00:00:41.354 18:55:21 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:00:41.354 18:55:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:41.354 18:55:21 -- common/autotest_common.sh@10 -- $ set +x
00:00:41.354 ************************************
00:00:41.354 START TEST autobuild_llvm_precompile
00:00:41.354 ************************************
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ _llvm_precompile
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38)
00:00:41.354 Target: x86_64-redhat-linux-gnu
00:00:41.354 Thread model: posix
00:00:41.354 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
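
Two bash idioms in the trace above are worth unpacking: the clang version is captured by matching `clang --version` output with `=~`, and the fuzzer runtime archive is located with an extended glob. A minimal sketch of both, assuming extglob semantics as in the trace (nullglob is added here so a miss yields an empty array):

#!/usr/bin/env bash
# Sketch of the clang version parse and fuzzer-lib glob traced above.
if [[ $(clang --version) =~ version\ (([0-9]+)\.([0-9]+)\.([0-9]+)) ]]; then
    clang_version=${BASH_REMATCH[1]}   # e.g. 16.0.6
    clang_num=${BASH_REMATCH[2]}       # e.g. 16
fi
shopt -s extglob nullglob
# @(a|b) matches either alternative; ?(-x86_64) optionally matches the suffix.
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
fuzzer_lib=${fuzzer_libs[0]}
[[ -e $fuzzer_lib ]] && echo "fuzzer runtime: $fuzzer_lib"
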
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]]
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a'
00:00:41.354 18:55:21 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:00:41.613 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:00:41.614 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:00:42.182 Using 'verbs' RDMA provider
00:00:58.007 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:12.911 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:12.911 Creating mk/config.mk...done.
00:01:12.911 Creating mk/cc.flags.mk...done.
00:01:12.911 Type 'make' to build.
00:01:12.911
00:01:12.911 real 0m30.779s
00:01:12.911 user 0m13.308s
00:01:12.911 sys 0m16.927s
00:01:12.911 18:55:52 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ xtrace_disable
18:55:52 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:01:12.911 ************************************
00:01:12.911 END TEST autobuild_llvm_precompile
00:01:12.911 ************************************
00:01:12.911 18:55:52 -- common/autotest_common.sh@1142 -- $ return 0
00:01:12.911 18:55:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
18:55:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
18:55:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
18:55:52 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
18:55:52 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
Using 'verbs' RDMA provider
00:01:26.063 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:38.278 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:38.537 Creating mk/config.mk...done.
00:01:38.537 Creating mk/cc.flags.mk...done.
00:01:38.537 Type 'make' to build.
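
The START TEST/END TEST banners and the real/user/sys block above come from the harness's run_test wrapper, which brackets a timed command; the next entry uses the same wrapper to launch the build (`run_test make make -j72`). A minimal sketch of such a wrapper, hypothetical but shaped like the trace:

#!/usr/bin/env bash
# Minimal run_test-style wrapper: banner, timed command, banner.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                  # prints the real/user/sys block seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test make make -j72        # the invocation the next log entry shows
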
00:01:38.537 18:56:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j72
18:56:18 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
18:56:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable
18:56:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:38.537 ************************************
00:01:38.537 START TEST make
00:01:38.537 ************************************
00:01:38.537 18:56:18 make -- common/autotest_common.sh@1123 -- $ make -j72
00:01:39.105 make[1]: Nothing to be done for 'all'.
00:01:41.020 The Meson build system
00:01:41.020 Version: 1.3.1
00:01:41.020 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:01:41.020 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:41.020 Build type: native build
00:01:41.020 Project name: libvfio-user
00:01:41.020 Project version: 0.0.1
00:01:41.020 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:01:41.020 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:01:41.020 Host machine cpu family: x86_64
00:01:41.020 Host machine cpu: x86_64
00:01:41.020 Run-time dependency threads found: YES
00:01:41.020 Library dl found: YES
00:01:41.020 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:41.020 Run-time dependency json-c found: YES 0.17
00:01:41.020 Run-time dependency cmocka found: YES 1.1.7
00:01:41.020 Program pytest-3 found: NO
00:01:41.020 Program flake8 found: NO
00:01:41.020 Program misspell-fixer found: NO
00:01:41.020 Program restructuredtext-lint found: NO
00:01:41.020 Program valgrind found: YES (/usr/bin/valgrind)
00:01:41.020 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:41.020 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:41.020 Compiler for C supports arguments -Wwrite-strings: YES
00:01:41.020 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:41.020 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:41.020 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:41.020 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:41.020 Build targets in project: 8
00:01:41.020 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:41.020 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:41.020
00:01:41.020 libvfio-user 0.0.1
00:01:41.020
00:01:41.020 User defined options
00:01:41.020 buildtype : debug
00:01:41.020 default_library: static
00:01:41.020 libdir : /usr/local/lib
00:01:41.020
00:01:41.020 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:41.020 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:41.020 [1/36] Compiling C object samples/lspci.p/lspci.c.o
00:01:41.020 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:41.020 [3/36] Compiling C object samples/null.p/null.c.o
00:01:41.020 [4/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:01:41.020 [5/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:41.020 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:01:41.020 [7/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:41.020 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:01:41.020 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:41.020 [10/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:01:41.020 [11/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:41.278 [12/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:01:41.279 [13/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:41.279 [14/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:01:41.279 [15/36] Compiling C object test/unit_tests.p/mocks.c.o
00:01:41.279 [16/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:41.279 [17/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:41.279 [18/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:41.279 [19/36] Compiling C object samples/server.p/server.c.o
00:01:41.279 [20/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:41.279 [21/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:41.279 [22/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:41.279 [23/36] Compiling C object samples/client.p/client.c.o
00:01:41.279 [24/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:01:41.279 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:41.279 [26/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:41.279 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:41.279 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:01:41.279 [29/36] Linking target samples/client
00:01:41.279 [30/36] Linking static target lib/libvfio-user.a
00:01:41.279 [31/36] Linking target test/unit_tests
00:01:41.279 [32/36] Linking target samples/null
00:01:41.279 [33/36] Linking target samples/lspci
00:01:41.279 [34/36] Linking target samples/gpio-pci-idio-16
00:01:41.279 [35/36] Linking target samples/server
00:01:41.279 [36/36] Linking target samples/shadow_ioeventfd_server
00:01:41.279 INFO: autodetecting backend as ninja
00:01:41.279 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:41.538 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:41.797 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:41.797 ninja: no work to do.
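
The libvfio-user build above is a plain meson-plus-ninja flow: configure a debug, static build directory, run ninja, then stage the result with a DESTDIR install. A minimal sketch of the same flow outside the harness (paths and options mirror the "User defined options" block above):

#!/usr/bin/env bash
# Sketch of the libvfio-user meson/ninja flow traced above.
SRC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
BUILD=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
DEST=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user
meson setup "$BUILD" "$SRC" --buildtype=debug --default-library=static \
    --libdir=/usr/local/lib            # matches "User defined options" above
ninja -C "$BUILD"                      # the [N/36] compile/link steps
DESTDIR="$DEST" meson install --quiet -C "$BUILD"   # stage into DESTDIR
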
00:01:48.371 The Meson build system
00:01:48.371 Version: 1.3.1
00:01:48.371 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:01:48.371 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:01:48.371 Build type: native build
00:01:48.371 Program cat found: YES (/usr/bin/cat)
00:01:48.371 Project name: DPDK
00:01:48.371 Project version: 24.03.0
00:01:48.371 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:01:48.371 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:01:48.371 Host machine cpu family: x86_64
00:01:48.371 Host machine cpu: x86_64
00:01:48.371 Message: ## Building in Developer Mode ##
00:01:48.371 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:48.371 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:48.371 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:48.371 Program python3 found: YES (/usr/bin/python3)
00:01:48.372 Program cat found: YES (/usr/bin/cat)
00:01:48.372 Compiler for C supports arguments -march=native: YES
00:01:48.372 Checking for size of "void *" : 8
00:01:48.372 Checking for size of "void *" : 8 (cached)
00:01:48.372 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:48.372 Library m found: YES
00:01:48.372 Library numa found: YES
00:01:48.372 Has header "numaif.h" : YES
00:01:48.372 Library fdt found: NO
00:01:48.372 Library execinfo found: NO
00:01:48.372 Has header "execinfo.h" : YES
00:01:48.372 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:48.372 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:48.372 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:48.372 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:48.372 Run-time dependency openssl found: YES 3.0.9
00:01:48.372 Run-time dependency libpcap found: YES 1.10.4
00:01:48.372 Has header "pcap.h" with dependency libpcap: YES
00:01:48.372 Compiler for C supports arguments -Wcast-qual: YES
00:01:48.372 Compiler for C supports arguments -Wdeprecated: YES
00:01:48.372 Compiler for C supports arguments -Wformat: YES
00:01:48.372 Compiler for C supports arguments -Wformat-nonliteral: YES
00:01:48.372 Compiler for C supports arguments -Wformat-security: YES
00:01:48.372 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:48.372 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:48.372 Compiler for C supports arguments -Wnested-externs: YES
00:01:48.372 Compiler for C supports arguments -Wold-style-definition: YES
00:01:48.372 Compiler for C supports arguments -Wpointer-arith: YES
00:01:48.372 Compiler for C supports arguments -Wsign-compare: YES
00:01:48.372 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:48.372 Compiler for C supports arguments -Wundef: YES
00:01:48.372 Compiler for C supports arguments -Wwrite-strings: YES
00:01:48.372 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:48.372 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:01:48.372 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:48.372 Program objdump found: YES (/usr/bin/objdump)
00:01:48.372 Compiler for C supports arguments -mavx512f: YES
00:01:48.372 Checking if "AVX512 checking" compiles: YES
00:01:48.372 Fetching value of define "__SSE4_2__" : 1
00:01:48.372 Fetching value of define "__AES__" : 1
00:01:48.372 Fetching value of define "__AVX__" : 1
00:01:48.372 Fetching value of define "__AVX2__" : 1
00:01:48.372 Fetching value of define "__AVX512BW__" : 1
00:01:48.372 Fetching value of define "__AVX512CD__" : 1
00:01:48.372 Fetching value of define "__AVX512DQ__" : 1
00:01:48.372 Fetching value of define "__AVX512F__" : 1
00:01:48.372 Fetching value of define "__AVX512VL__" : 1
00:01:48.372 Fetching value of define "__PCLMUL__" : 1
00:01:48.372 Fetching value of define "__RDRND__" : 1
00:01:48.372 Fetching value of define "__RDSEED__" : 1
00:01:48.372 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:48.372 Fetching value of define "__znver1__" : (undefined)
00:01:48.372 Fetching value of define "__znver2__" : (undefined)
00:01:48.372 Fetching value of define "__znver3__" : (undefined)
00:01:48.372 Fetching value of define "__znver4__" : (undefined)
00:01:48.372 Compiler for C supports arguments -Wno-format-truncation: NO
00:01:48.372 Message: lib/log: Defining dependency "log"
00:01:48.372 Message: lib/kvargs: Defining dependency "kvargs"
00:01:48.372 Message: lib/telemetry: Defining dependency "telemetry"
00:01:48.372 Checking for function "getentropy" : NO
00:01:48.372 Message: lib/eal: Defining dependency "eal"
00:01:48.372 Message: lib/ring: Defining dependency "ring"
00:01:48.372 Message: lib/rcu: Defining dependency "rcu"
00:01:48.372 Message: lib/mempool: Defining dependency "mempool"
00:01:48.372 Message: lib/mbuf: Defining dependency "mbuf"
00:01:48.372 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:48.372 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:48.372 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:48.372 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:48.372 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:48.372 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:48.372 Compiler for C supports arguments -mpclmul: YES
00:01:48.372 Compiler for C supports arguments -maes: YES
00:01:48.372 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:48.372 Compiler for C supports arguments -mavx512bw: YES
00:01:48.372 Compiler for C supports arguments -mavx512dq: YES
00:01:48.372 Compiler for C supports arguments -mavx512vl: YES
00:01:48.372 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:48.372 Compiler for C supports arguments -mavx2: YES
00:01:48.372 Compiler for C supports arguments -mavx: YES
00:01:48.372 Message: lib/net: Defining dependency "net"
00:01:48.372 Message: lib/meter: Defining dependency "meter"
00:01:48.372 Message: lib/ethdev: Defining dependency "ethdev"
00:01:48.372 Message: lib/pci: Defining dependency "pci"
00:01:48.372 Message: lib/cmdline: Defining dependency "cmdline"
00:01:48.372 Message: lib/hash: Defining dependency "hash"
00:01:48.372 Message: lib/timer: Defining dependency "timer"
00:01:48.372 Message: lib/compressdev: Defining dependency "compressdev"
00:01:48.372 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:48.372 Message: lib/dmadev: Defining dependency "dmadev"
00:01:48.372 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:48.372 Message: lib/power: Defining dependency "power"
00:01:48.372 Message: lib/reorder: Defining dependency "reorder"
00:01:48.372 Message: lib/security: Defining dependency "security"
00:01:48.372 Has header "linux/userfaultfd.h" : YES
00:01:48.372 Has header "linux/vduse.h" : YES
00:01:48.372 Message: lib/vhost: Defining dependency "vhost"
00:01:48.372 Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:01:48.372 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:48.372 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:48.372 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:48.372 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:48.372 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:48.372 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:48.372 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:48.372 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:48.372 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:48.372 Program doxygen found: YES (/usr/bin/doxygen)
00:01:48.372 Configuring doxy-api-html.conf using configuration
00:01:48.372 Configuring doxy-api-man.conf using configuration
00:01:48.372 Program mandb found: YES (/usr/bin/mandb)
00:01:48.372 Program sphinx-build found: NO
00:01:48.372 Configuring rte_build_config.h using configuration
00:01:48.372 Message:
00:01:48.372 =================
00:01:48.372 Applications Enabled
00:01:48.372 =================
00:01:48.372
00:01:48.372 apps:
00:01:48.372
00:01:48.372
00:01:48.372 Message:
00:01:48.372 =================
00:01:48.372 Libraries Enabled
00:01:48.372 =================
00:01:48.372
00:01:48.372 libs:
00:01:48.372 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:48.372 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:48.372 cryptodev, dmadev, power, reorder, security, vhost,
00:01:48.372
00:01:48.372 Message:
00:01:48.372 ===============
00:01:48.372 Drivers Enabled
00:01:48.372 ===============
00:01:48.372
00:01:48.372 common:
00:01:48.372
00:01:48.372 bus:
00:01:48.372 pci, vdev,
00:01:48.372 mempool:
00:01:48.372 ring,
00:01:48.372 dma:
00:01:48.372
00:01:48.372 net:
00:01:48.372
00:01:48.372 crypto:
00:01:48.372
00:01:48.372 compress:
00:01:48.372
00:01:48.372 vdpa:
00:01:48.372
00:01:48.372
00:01:48.372 Message:
00:01:48.372 =================
00:01:48.372 Content Skipped
00:01:48.372 =================
00:01:48.372
00:01:48.372 apps:
00:01:48.372 dumpcap: explicitly disabled via build config
00:01:48.372 graph: explicitly disabled via build config
00:01:48.372 pdump: explicitly disabled via build config
00:01:48.372 proc-info: explicitly disabled via build config
00:01:48.372 test-acl: explicitly disabled via build config
00:01:48.372 test-bbdev: explicitly disabled via build config
00:01:48.372 test-cmdline: explicitly disabled via build config
00:01:48.372 test-compress-perf: explicitly disabled via build config
00:01:48.372 test-crypto-perf: explicitly disabled via build config
00:01:48.372 test-dma-perf: explicitly disabled via build config
00:01:48.372 test-eventdev: explicitly disabled via build config
00:01:48.372 test-fib: explicitly disabled via build config
00:01:48.372 test-flow-perf: explicitly disabled via build config
00:01:48.372 test-gpudev: explicitly disabled via build config
00:01:48.372 test-mldev: explicitly disabled via build config
00:01:48.372 test-pipeline: explicitly disabled via build config
00:01:48.372 test-pmd: explicitly disabled via build config
00:01:48.372 test-regex: explicitly disabled via build config
00:01:48.372 test-sad: explicitly disabled via build config
00:01:48.372 test-security-perf: explicitly disabled via build config
00:01:48.372
00:01:48.372 libs:
00:01:48.372 argparse: explicitly disabled via build config
00:01:48.372 metrics: explicitly disabled via build config
00:01:48.372 acl: explicitly disabled via build config
00:01:48.372 bbdev: explicitly disabled via build config
00:01:48.372 bitratestats: explicitly disabled via build config
00:01:48.372 bpf: explicitly disabled via build config
00:01:48.372 cfgfile: explicitly disabled via build config
00:01:48.372 distributor: explicitly disabled via build config
00:01:48.372 efd: explicitly disabled via build config
00:01:48.372 eventdev: explicitly disabled via build config
00:01:48.372 dispatcher: explicitly disabled via build config
00:01:48.372 gpudev: explicitly disabled via build config
00:01:48.372 gro: explicitly disabled via build config
00:01:48.372 gso: explicitly disabled via build config
00:01:48.372 ip_frag: explicitly disabled via build config
00:01:48.372 jobstats: explicitly disabled via build config
00:01:48.372 latencystats: explicitly disabled via build config
00:01:48.372 lpm: explicitly disabled via build config
00:01:48.372 member: explicitly disabled via build config
00:01:48.372 pcapng: explicitly disabled via build config
00:01:48.372 rawdev: explicitly disabled via build config
00:01:48.372 regexdev: explicitly disabled via build config
00:01:48.372 mldev: explicitly disabled via build config
00:01:48.372 rib: explicitly disabled via build config
00:01:48.372 sched: explicitly disabled via build config
00:01:48.372 stack: explicitly disabled via build config
00:01:48.372 ipsec: explicitly disabled via build config
00:01:48.372 pdcp: explicitly disabled via build config
00:01:48.372 fib: explicitly disabled via build config
00:01:48.372 port: explicitly disabled via build config
00:01:48.373 pdump: explicitly disabled via build config
00:01:48.373 table: explicitly disabled via build config
00:01:48.373 pipeline: explicitly disabled via build config
00:01:48.373 graph: explicitly disabled via build config
00:01:48.373 node: explicitly disabled via build config
00:01:48.373
00:01:48.373 drivers:
00:01:48.373 common/cpt: not in enabled drivers build config
00:01:48.373 common/dpaax: not in enabled drivers build config
00:01:48.373 common/iavf: not in enabled drivers build config
00:01:48.373 common/idpf: not in enabled drivers build config
00:01:48.373 common/ionic: not in enabled drivers build config
00:01:48.373 common/mvep: not in enabled drivers build config
00:01:48.373 common/octeontx: not in enabled drivers build config
00:01:48.373 bus/auxiliary: not in enabled drivers build config
00:01:48.373 bus/cdx: not in enabled drivers build config
00:01:48.373 bus/dpaa: not in enabled drivers build config
00:01:48.373 bus/fslmc: not in enabled drivers build config
00:01:48.373 bus/ifpga: not in enabled drivers build config
00:01:48.373 bus/platform: not in enabled drivers build config
00:01:48.373 bus/uacce: not in enabled drivers build config
00:01:48.373 bus/vmbus: not in enabled drivers build config
00:01:48.373 common/cnxk: not in enabled drivers build config
00:01:48.373 common/mlx5: not in enabled drivers build config
00:01:48.373 common/nfp: not in enabled drivers build config
00:01:48.373 common/nitrox: not in enabled drivers build config
00:01:48.373 common/qat: not in enabled drivers build config
00:01:48.373 common/sfc_efx: not in enabled drivers build config
00:01:48.373 mempool/bucket: not in enabled drivers build config
00:01:48.373 mempool/cnxk: not in enabled drivers build config
00:01:48.373 mempool/dpaa: not in enabled drivers build config
00:01:48.373 mempool/dpaa2: not in enabled drivers build config
00:01:48.373 mempool/octeontx: not in enabled drivers build config
00:01:48.373 mempool/stack: not in enabled drivers build config
00:01:48.373 dma/cnxk: not in enabled drivers build config
00:01:48.373 dma/dpaa: not in enabled drivers build config
00:01:48.373 dma/dpaa2: not in enabled drivers build config
00:01:48.373 dma/hisilicon: not in enabled drivers build config
00:01:48.373 dma/idxd: not in enabled drivers build config
00:01:48.373 dma/ioat: not in enabled drivers build config
00:01:48.373 dma/skeleton: not in enabled drivers build config
00:01:48.373 net/af_packet: not in enabled drivers build config
00:01:48.373 net/af_xdp: not in enabled drivers build config
00:01:48.373 net/ark: not in enabled drivers build config
00:01:48.373 net/atlantic: not in enabled drivers build config
00:01:48.373 net/avp: not in enabled drivers build config
00:01:48.373 net/axgbe: not in enabled drivers build config
00:01:48.373 net/bnx2x: not in enabled drivers build config
00:01:48.373 net/bnxt: not in enabled drivers build config
00:01:48.373 net/bonding: not in enabled drivers build config
00:01:48.373 net/cnxk: not in enabled drivers build config
00:01:48.373 net/cpfl: not in enabled drivers build config
00:01:48.373 net/cxgbe: not in enabled drivers build config
00:01:48.373 net/dpaa: not in enabled drivers build config
00:01:48.373 net/dpaa2: not in enabled drivers build config
00:01:48.373 net/e1000: not in enabled drivers build config
00:01:48.373 net/ena: not in enabled drivers build config
00:01:48.373 net/enetc: not in enabled drivers build config
00:01:48.373 net/enetfec: not in enabled drivers build config
00:01:48.373 net/enic: not in enabled drivers build config
00:01:48.373 net/failsafe: not in enabled drivers build config
00:01:48.373 net/fm10k: not in enabled drivers build config
00:01:48.373 net/gve: not in enabled drivers build config
00:01:48.373 net/hinic: not in enabled drivers build config
00:01:48.373 net/hns3: not in enabled drivers build config
00:01:48.373 net/i40e: not in enabled drivers build config
00:01:48.373 net/iavf: not in enabled drivers build config
00:01:48.373 net/ice: not in enabled drivers build config
00:01:48.373 net/idpf: not in enabled drivers build config
00:01:48.373 net/igc: not in enabled drivers build config
00:01:48.373 net/ionic: not in enabled drivers build config
00:01:48.373 net/ipn3ke: not in enabled drivers build config
00:01:48.373 net/ixgbe: not in enabled drivers build config
00:01:48.373 net/mana: not in enabled drivers build config
00:01:48.373 net/memif: not in enabled drivers build config
00:01:48.373 net/mlx4: not in enabled drivers build config
00:01:48.373 net/mlx5: not in enabled drivers build config
00:01:48.373 net/mvneta: not in enabled drivers build config
00:01:48.373 net/mvpp2: not in enabled drivers build config
00:01:48.373 net/netvsc: not in enabled drivers build config
00:01:48.373 net/nfb: not in enabled drivers build config
00:01:48.373 net/nfp: not in enabled drivers build config
00:01:48.373 net/ngbe: not in enabled drivers build config
00:01:48.373 net/null: not in enabled drivers build config
00:01:48.373 net/octeontx: not in enabled drivers build config
00:01:48.373 net/octeon_ep: not in enabled drivers build config
00:01:48.373 net/pcap: not in enabled drivers build config
00:01:48.373 net/pfe: not in enabled drivers build config
00:01:48.373 net/qede: not in enabled drivers build config
00:01:48.373 net/ring: not in enabled drivers build config
00:01:48.373 net/sfc: not in enabled drivers build config
00:01:48.373 net/softnic: not in enabled drivers build config
00:01:48.373 net/tap: not in enabled drivers build config
00:01:48.373 net/thunderx: not in enabled drivers build config
00:01:48.373 net/txgbe: not in enabled drivers build config
00:01:48.373 net/vdev_netvsc: not in enabled drivers build config
00:01:48.373 net/vhost: not in enabled drivers build config
00:01:48.373 net/virtio: not in enabled drivers build config
00:01:48.373 net/vmxnet3: not in enabled drivers build config
00:01:48.373 raw/*: missing internal dependency, "rawdev"
00:01:48.373 crypto/armv8: not in enabled drivers build config
00:01:48.373 crypto/bcmfs: not in enabled drivers build config
00:01:48.373 crypto/caam_jr: not in enabled drivers build config
00:01:48.373 crypto/ccp: not in enabled drivers build config
00:01:48.373 crypto/cnxk: not in enabled drivers build config
00:01:48.373 crypto/dpaa_sec: not in enabled drivers build config
00:01:48.373 crypto/dpaa2_sec: not in enabled drivers build config
00:01:48.373 crypto/ipsec_mb: not in enabled drivers build config
00:01:48.373 crypto/mlx5: not in enabled drivers build config
00:01:48.373 crypto/mvsam: not in enabled drivers build config
00:01:48.373 crypto/nitrox: not in enabled drivers build config
00:01:48.373 crypto/null: not in enabled drivers build config
00:01:48.373 crypto/octeontx: not in enabled drivers build config
00:01:48.373 crypto/openssl: not in enabled drivers build config
00:01:48.373 crypto/scheduler: not in enabled drivers build config
00:01:48.373 crypto/uadk: not in enabled drivers build config
00:01:48.373 crypto/virtio: not in enabled drivers build config
00:01:48.373 compress/isal: not in enabled drivers build config
00:01:48.373 compress/mlx5: not in enabled drivers build config
00:01:48.373 compress/nitrox: not in enabled drivers build config
00:01:48.373 compress/octeontx: not in enabled drivers build config
00:01:48.373 compress/zlib: not in enabled drivers build config
00:01:48.373 regex/*: missing internal dependency, "regexdev"
00:01:48.373 ml/*: missing internal dependency, "mldev"
00:01:48.373 vdpa/ifc: not in enabled drivers build config
00:01:48.373 vdpa/mlx5: not in enabled drivers build config
00:01:48.373 vdpa/nfp: not in enabled drivers build config
00:01:48.373 vdpa/sfc: not in enabled drivers build config
00:01:48.373 event/*: missing internal dependency, "eventdev"
00:01:48.373 baseband/*: missing internal dependency, "bbdev"
00:01:48.373 gpu/*: missing internal dependency, "gpudev"
00:01:48.373
00:01:48.373
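
The "Content Skipped" report above and the "User defined options" summary that follows are both products of the options SPDK hands to DPDK's meson. A minimal sketch of an equivalent standalone configure; the disable_apps/disable_libs lists are abbreviated here (the full lists appear in the summary below), so this is illustrative rather than exact:

#!/usr/bin/env bash
# Sketch of a DPDK meson configure matching the options summarized below.
SRC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
BUILD=$SRC/build-tmp
meson setup "$BUILD" "$SRC" \
    --buildtype=debug --default-library=static \
    --prefix="$SRC/build" --libdir=lib \
    -Dc_args='-fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps='test-acl,test-pmd' \
    -Ddisable_libs='port,sched,rib,node' \
    -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
    -Denable_docs=false -Dtests=false -Dmax_lcores=128 -Denable_kmods=false
ninja -C "$BUILD"
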
00:01:48.373 Build targets in project: 85
00:01:48.373
00:01:48.373 DPDK 24.03.0
00:01:48.373
00:01:48.373 User defined options
00:01:48.373 buildtype : debug
00:01:48.373 default_library : static
00:01:48.373 libdir : lib
00:01:48.373 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:48.373 c_args : -fPIC -Werror
00:01:48.373 c_link_args :
00:01:48.373 cpu_instruction_set: native
00:01:48.373 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:48.373 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:01:48.373 enable_docs : false
00:01:48.373 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:48.373 enable_kmods : false
00:01:48.373 max_lcores : 128
00:01:48.373 tests : false
00:01:48.373
00:01:48.373 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:48.373 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp'
00:01:48.373 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:48.373 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:48.373 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:48.373 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:48.373 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:48.373 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:48.373 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:48.373 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:48.373 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:48.373 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:48.373 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:48.373 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:48.373 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:48.373 [14/268] Linking static target lib/librte_kvargs.a
00:01:48.373 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:48.373 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:48.373 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:48.373 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:48.373 [19/268] Linking static target lib/librte_log.a
00:01:48.944 [20/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.944 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:48.944 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:48.944 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:48.944 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:48.944 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:48.944 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:48.944 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:48.944 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:48.944 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:48.944 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:48.944 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:48.944 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:48.944 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
[34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:48.944 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:48.944 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:48.944 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:48.944 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:48.944 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:48.944 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:48.944 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:48.944 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:48.944 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:48.944 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:48.944 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:48.944 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:48.944 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:48.944 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:48.944 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:48.944 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:48.944 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:48.944 [52/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:48.944 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:48.944 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:48.944 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:48.944 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:48.944 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:48.944 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:48.944 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:48.944 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:48.944 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:48.944 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:48.944 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:48.944 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:48.944 [65/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:48.944 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:48.944 [67/268] Linking static target lib/librte_telemetry.a 00:01:48.944 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:48.944 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:48.944 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:48.944 [71/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:48.944 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:48.944 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:48.944 [74/268] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:48.944 [75/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:48.944 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:48.944 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:48.944 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:48.944 [79/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:48.944 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:48.944 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.944 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:48.944 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:48.944 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:48.944 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:48.944 [86/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:48.944 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:48.944 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:48.944 [89/268] Linking static target lib/librte_pci.a 00:01:48.944 [90/268] Linking static target lib/librte_ring.a 00:01:48.944 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:48.944 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.944 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:48.944 [94/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:48.944 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:48.944 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:48.944 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:48.944 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:48.944 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.944 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.944 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:48.944 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:48.944 [103/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:48.944 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:48.944 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:48.944 [106/268] Linking static target lib/librte_eal.a 00:01:49.203 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:49.203 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:49.203 [109/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:49.203 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:49.203 [111/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.203 [112/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:49.203 [113/268] Linking static target lib/librte_rcu.a 00:01:49.203 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:49.203 [115/268] Linking target lib/librte_log.so.24.1 
00:01:49.203 [116/268] Linking static target lib/librte_mempool.a 00:01:49.203 [117/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:49.203 [118/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.203 [119/268] Linking static target lib/librte_mbuf.a 00:01:49.203 [120/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.462 [121/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:49.463 [122/268] Linking static target lib/librte_net.a 00:01:49.463 [123/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:49.463 [124/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:49.463 [125/268] Linking target lib/librte_kvargs.so.24.1 00:01:49.463 [126/268] Linking static target lib/librte_meter.a 00:01:49.463 [127/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:49.463 [128/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:49.463 [129/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.463 [130/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:49.463 [131/268] Linking static target lib/librte_timer.a 00:01:49.463 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:49.463 [133/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.463 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:49.463 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:49.463 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:49.463 [137/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:49.463 [138/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:49.463 [139/268] Linking target lib/librte_telemetry.so.24.1 00:01:49.463 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:49.463 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:49.463 [142/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:49.463 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:49.463 [144/268] Linking static target lib/librte_cmdline.a 00:01:49.463 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:49.463 [146/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:49.463 [147/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:49.463 [148/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:49.463 [149/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:49.463 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:49.463 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:49.463 [152/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:49.463 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:49.463 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:49.463 [155/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:49.463 [156/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:49.463 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:49.722 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:49.722 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:49.722 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:49.722 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:49.722 [162/268] Linking static target lib/librte_compressdev.a 00:01:49.722 [163/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:49.722 [164/268] Linking static target lib/librte_dmadev.a 00:01:49.722 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:49.722 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:49.722 [167/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:49.722 [168/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:49.722 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:49.722 [170/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:49.722 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:49.722 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:49.722 [173/268] Linking static target lib/librte_security.a 00:01:49.722 [174/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.722 [175/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:49.722 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:49.722 [177/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:49.722 [178/268] Linking static target lib/librte_power.a 00:01:49.722 [179/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:49.722 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:49.722 [181/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.722 [182/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:49.722 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:49.722 [184/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:49.722 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:49.722 [186/268] Linking static target lib/librte_reorder.a 00:01:49.722 [187/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:49.722 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:49.722 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:49.722 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:49.722 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:49.722 [192/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:49.722 [193/268] Linking static target lib/librte_hash.a 00:01:49.722 [194/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:49.722 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:49.722 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 
00:01:49.982 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:49.982 [198/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:49.982 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:49.982 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.982 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.982 [202/268] Linking static target lib/librte_cryptodev.a 00:01:49.982 [203/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:49.982 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:49.982 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.982 [206/268] Linking static target drivers/librte_bus_vdev.a 00:01:49.982 [207/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.982 [208/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.982 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.982 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.982 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.982 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.982 [213/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.982 [214/268] Linking static target drivers/librte_bus_pci.a 00:01:49.982 [215/268] Linking static target drivers/librte_mempool_ring.a 00:01:49.982 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:50.241 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.241 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.241 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:50.241 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.241 [221/268] Linking static target lib/librte_ethdev.a 00:01:50.241 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.562 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.862 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.862 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.862 [226/268] Linking static target lib/librte_vhost.a 00:01:50.862 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.862 [228/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.862 [229/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.240 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.176 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.297 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:02.240 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.499 [234/268] Linking target lib/librte_eal.so.24.1 00:02:02.499 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:02.499 [236/268] Linking target lib/librte_timer.so.24.1 00:02:02.757 [237/268] Linking target lib/librte_ring.so.24.1 00:02:02.757 [238/268] Linking target lib/librte_meter.so.24.1 00:02:02.757 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:02.757 [240/268] Linking target lib/librte_pci.so.24.1 00:02:02.757 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:02.757 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:02.757 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:02.757 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:02.757 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:02.757 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:02.757 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:02.757 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:02.757 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:03.016 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:03.016 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:03.016 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:03.016 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:03.275 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:03.275 [255/268] Linking target lib/librte_net.so.24.1 00:02:03.275 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:03.275 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:03.275 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:03.275 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:03.534 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:03.534 [261/268] Linking target lib/librte_hash.so.24.1 00:02:03.534 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:03.534 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:03.534 [264/268] Linking target lib/librte_security.so.24.1 00:02:03.534 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:03.534 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:03.793 [267/268] Linking target lib/librte_power.so.24.1 00:02:03.793 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:03.793 INFO: autodetecting backend as ninja 00:02:03.793 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:04.746 CC lib/ut/ut.o 00:02:04.746 CC lib/ut_mock/mock.o 00:02:04.746 CC lib/log/log.o 00:02:04.746 CC lib/log/log_flags.o 00:02:04.746 CC lib/log/log_deprecated.o 00:02:04.747 LIB libspdk_ut.a 00:02:04.747 LIB libspdk_ut_mock.a 00:02:04.747 LIB libspdk_log.a 00:02:05.315 CC lib/ioat/ioat.o 00:02:05.315 CC lib/dma/dma.o 00:02:05.315 CC lib/util/base64.o 00:02:05.315 CC lib/util/bit_array.o 00:02:05.315 CC lib/util/crc16.o 00:02:05.315 CC lib/util/cpuset.o 00:02:05.315 CC 
lib/util/crc32.o 00:02:05.315 CC lib/util/crc32c.o 00:02:05.315 CC lib/util/crc32_ieee.o 00:02:05.315 CC lib/util/crc64.o 00:02:05.315 CXX lib/trace_parser/trace.o 00:02:05.315 CC lib/util/dif.o 00:02:05.315 CC lib/util/fd.o 00:02:05.315 CC lib/util/file.o 00:02:05.315 CC lib/util/hexlify.o 00:02:05.315 CC lib/util/iov.o 00:02:05.315 CC lib/util/math.o 00:02:05.315 CC lib/util/pipe.o 00:02:05.315 CC lib/util/strerror_tls.o 00:02:05.315 CC lib/util/string.o 00:02:05.315 CC lib/util/uuid.o 00:02:05.315 CC lib/util/fd_group.o 00:02:05.315 CC lib/util/xor.o 00:02:05.315 CC lib/util/zipf.o 00:02:05.315 LIB libspdk_dma.a 00:02:05.315 CC lib/vfio_user/host/vfio_user_pci.o 00:02:05.315 CC lib/vfio_user/host/vfio_user.o 00:02:05.315 LIB libspdk_ioat.a 00:02:05.574 LIB libspdk_vfio_user.a 00:02:05.574 LIB libspdk_util.a 00:02:05.574 LIB libspdk_trace_parser.a 00:02:05.833 CC lib/idxd/idxd.o 00:02:05.833 CC lib/idxd/idxd_kernel.o 00:02:05.833 CC lib/idxd/idxd_user.o 00:02:05.833 CC lib/conf/conf.o 00:02:05.833 CC lib/rdma_utils/rdma_utils.o 00:02:05.833 CC lib/rdma_provider/common.o 00:02:05.833 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:05.833 CC lib/vmd/vmd.o 00:02:05.833 CC lib/json/json_parse.o 00:02:05.833 CC lib/json/json_util.o 00:02:05.833 CC lib/vmd/led.o 00:02:05.833 CC lib/json/json_write.o 00:02:05.833 CC lib/env_dpdk/env.o 00:02:05.833 CC lib/env_dpdk/memory.o 00:02:05.833 CC lib/env_dpdk/pci.o 00:02:05.833 CC lib/env_dpdk/init.o 00:02:05.833 CC lib/env_dpdk/threads.o 00:02:05.833 CC lib/env_dpdk/pci_virtio.o 00:02:05.833 CC lib/env_dpdk/pci_ioat.o 00:02:05.833 CC lib/env_dpdk/pci_vmd.o 00:02:05.833 CC lib/env_dpdk/pci_idxd.o 00:02:05.833 CC lib/env_dpdk/pci_event.o 00:02:05.833 CC lib/env_dpdk/sigbus_handler.o 00:02:05.833 CC lib/env_dpdk/pci_dpdk.o 00:02:05.833 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:05.833 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:06.092 LIB libspdk_rdma_provider.a 00:02:06.092 LIB libspdk_conf.a 00:02:06.092 LIB libspdk_rdma_utils.a 00:02:06.092 LIB libspdk_json.a 00:02:06.351 LIB libspdk_idxd.a 00:02:06.351 LIB libspdk_vmd.a 00:02:06.351 CC lib/jsonrpc/jsonrpc_server.o 00:02:06.351 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:06.351 CC lib/jsonrpc/jsonrpc_client.o 00:02:06.351 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:06.610 LIB libspdk_jsonrpc.a 00:02:06.870 LIB libspdk_env_dpdk.a 00:02:06.870 CC lib/rpc/rpc.o 00:02:07.129 LIB libspdk_rpc.a 00:02:07.389 CC lib/keyring/keyring.o 00:02:07.389 CC lib/keyring/keyring_rpc.o 00:02:07.389 CC lib/notify/notify.o 00:02:07.389 CC lib/notify/notify_rpc.o 00:02:07.389 CC lib/trace/trace.o 00:02:07.389 CC lib/trace/trace_flags.o 00:02:07.389 CC lib/trace/trace_rpc.o 00:02:07.648 LIB libspdk_notify.a 00:02:07.648 LIB libspdk_keyring.a 00:02:07.648 LIB libspdk_trace.a 00:02:07.908 CC lib/thread/thread.o 00:02:07.908 CC lib/sock/sock.o 00:02:07.908 CC lib/thread/iobuf.o 00:02:07.908 CC lib/sock/sock_rpc.o 00:02:08.168 LIB libspdk_sock.a 00:02:08.736 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:08.736 CC lib/nvme/nvme_ctrlr.o 00:02:08.736 CC lib/nvme/nvme_fabric.o 00:02:08.736 CC lib/nvme/nvme_ns_cmd.o 00:02:08.736 CC lib/nvme/nvme_ns.o 00:02:08.736 CC lib/nvme/nvme_pcie_common.o 00:02:08.736 CC lib/nvme/nvme_pcie.o 00:02:08.736 CC lib/nvme/nvme_qpair.o 00:02:08.736 CC lib/nvme/nvme.o 00:02:08.736 CC lib/nvme/nvme_quirks.o 00:02:08.736 CC lib/nvme/nvme_transport.o 00:02:08.736 CC lib/nvme/nvme_discovery.o 00:02:08.736 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:08.736 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:08.736 CC lib/nvme/nvme_tcp.o 00:02:08.736 
CC lib/nvme/nvme_opal.o 00:02:08.736 CC lib/nvme/nvme_io_msg.o 00:02:08.736 CC lib/nvme/nvme_poll_group.o 00:02:08.736 CC lib/nvme/nvme_zns.o 00:02:08.736 CC lib/nvme/nvme_stubs.o 00:02:08.736 CC lib/nvme/nvme_auth.o 00:02:08.736 CC lib/nvme/nvme_cuse.o 00:02:08.736 CC lib/nvme/nvme_vfio_user.o 00:02:08.736 CC lib/nvme/nvme_rdma.o 00:02:08.736 LIB libspdk_thread.a 00:02:08.994 CC lib/virtio/virtio_vhost_user.o 00:02:08.994 CC lib/virtio/virtio.o 00:02:08.994 CC lib/virtio/virtio_pci.o 00:02:08.994 CC lib/virtio/virtio_vfio_user.o 00:02:08.994 CC lib/init/json_config.o 00:02:08.994 CC lib/init/subsystem.o 00:02:08.994 CC lib/init/subsystem_rpc.o 00:02:08.994 CC lib/init/rpc.o 00:02:08.994 CC lib/accel/accel.o 00:02:08.994 CC lib/accel/accel_sw.o 00:02:08.994 CC lib/accel/accel_rpc.o 00:02:08.994 CC lib/vfu_tgt/tgt_endpoint.o 00:02:08.994 CC lib/vfu_tgt/tgt_rpc.o 00:02:08.994 CC lib/blob/blobstore.o 00:02:08.994 CC lib/blob/request.o 00:02:08.994 CC lib/blob/zeroes.o 00:02:08.994 CC lib/blob/blob_bs_dev.o 00:02:09.252 LIB libspdk_init.a 00:02:09.252 LIB libspdk_virtio.a 00:02:09.252 LIB libspdk_vfu_tgt.a 00:02:09.511 CC lib/event/app.o 00:02:09.511 CC lib/event/reactor.o 00:02:09.511 CC lib/event/log_rpc.o 00:02:09.511 CC lib/event/app_rpc.o 00:02:09.511 CC lib/event/scheduler_static.o 00:02:09.771 LIB libspdk_accel.a 00:02:09.771 LIB libspdk_event.a 00:02:10.031 LIB libspdk_nvme.a 00:02:10.031 CC lib/bdev/bdev.o 00:02:10.031 CC lib/bdev/bdev_rpc.o 00:02:10.031 CC lib/bdev/bdev_zone.o 00:02:10.031 CC lib/bdev/part.o 00:02:10.031 CC lib/bdev/scsi_nvme.o 00:02:10.970 LIB libspdk_blob.a 00:02:11.228 CC lib/blobfs/blobfs.o 00:02:11.228 CC lib/blobfs/tree.o 00:02:11.228 CC lib/lvol/lvol.o 00:02:11.797 LIB libspdk_lvol.a 00:02:11.797 LIB libspdk_blobfs.a 00:02:11.797 LIB libspdk_bdev.a 00:02:12.057 CC lib/nvmf/ctrlr.o 00:02:12.057 CC lib/scsi/dev.o 00:02:12.057 CC lib/ublk/ublk.o 00:02:12.057 CC lib/nvmf/ctrlr_discovery.o 00:02:12.057 CC lib/scsi/lun.o 00:02:12.057 CC lib/ublk/ublk_rpc.o 00:02:12.057 CC lib/nbd/nbd.o 00:02:12.057 CC lib/nvmf/ctrlr_bdev.o 00:02:12.057 CC lib/scsi/port.o 00:02:12.057 CC lib/scsi/scsi.o 00:02:12.057 CC lib/nvmf/subsystem.o 00:02:12.057 CC lib/nbd/nbd_rpc.o 00:02:12.057 CC lib/ftl/ftl_core.o 00:02:12.057 CC lib/nvmf/nvmf.o 00:02:12.057 CC lib/ftl/ftl_init.o 00:02:12.057 CC lib/ftl/ftl_debug.o 00:02:12.057 CC lib/scsi/scsi_bdev.o 00:02:12.057 CC lib/nvmf/nvmf_rpc.o 00:02:12.057 CC lib/ftl/ftl_layout.o 00:02:12.057 CC lib/nvmf/transport.o 00:02:12.057 CC lib/scsi/scsi_pr.o 00:02:12.057 CC lib/scsi/scsi_rpc.o 00:02:12.057 CC lib/ftl/ftl_io.o 00:02:12.057 CC lib/nvmf/tcp.o 00:02:12.057 CC lib/scsi/task.o 00:02:12.057 CC lib/nvmf/mdns_server.o 00:02:12.057 CC lib/ftl/ftl_sb.o 00:02:12.057 CC lib/nvmf/stubs.o 00:02:12.057 CC lib/ftl/ftl_l2p.o 00:02:12.057 CC lib/ftl/ftl_nv_cache.o 00:02:12.057 CC lib/nvmf/vfio_user.o 00:02:12.057 CC lib/ftl/ftl_l2p_flat.o 00:02:12.057 CC lib/nvmf/rdma.o 00:02:12.057 CC lib/ftl/ftl_band.o 00:02:12.057 CC lib/nvmf/auth.o 00:02:12.057 CC lib/ftl/ftl_band_ops.o 00:02:12.057 CC lib/ftl/ftl_writer.o 00:02:12.057 CC lib/ftl/ftl_reloc.o 00:02:12.318 CC lib/ftl/ftl_rq.o 00:02:12.318 CC lib/ftl/ftl_l2p_cache.o 00:02:12.318 CC lib/ftl/ftl_p2l.o 00:02:12.318 CC lib/ftl/mngt/ftl_mngt.o 00:02:12.318 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:12.318 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:12.318 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:12.318 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:12.318 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:12.318 CC 
lib/ftl/mngt/ftl_mngt_ioch.o 00:02:12.318 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:12.318 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:12.318 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:12.318 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:12.318 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:12.318 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:12.318 CC lib/ftl/utils/ftl_conf.o 00:02:12.318 CC lib/ftl/utils/ftl_md.o 00:02:12.318 CC lib/ftl/utils/ftl_mempool.o 00:02:12.318 CC lib/ftl/utils/ftl_property.o 00:02:12.319 CC lib/ftl/utils/ftl_bitmap.o 00:02:12.319 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:12.319 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:12.319 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:12.319 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:12.319 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:12.319 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:12.319 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:12.319 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:12.319 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:12.319 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:12.319 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:12.319 CC lib/ftl/base/ftl_base_dev.o 00:02:12.319 CC lib/ftl/base/ftl_base_bdev.o 00:02:12.319 CC lib/ftl/ftl_trace.o 00:02:12.577 LIB libspdk_nbd.a 00:02:12.577 LIB libspdk_scsi.a 00:02:12.836 LIB libspdk_ublk.a 00:02:13.093 LIB libspdk_ftl.a 00:02:13.093 CC lib/vhost/vhost.o 00:02:13.093 CC lib/vhost/vhost_rpc.o 00:02:13.093 CC lib/vhost/vhost_scsi.o 00:02:13.093 CC lib/vhost/vhost_blk.o 00:02:13.093 CC lib/vhost/rte_vhost_user.o 00:02:13.093 CC lib/iscsi/conn.o 00:02:13.093 CC lib/iscsi/init_grp.o 00:02:13.093 CC lib/iscsi/iscsi.o 00:02:13.093 CC lib/iscsi/md5.o 00:02:13.093 CC lib/iscsi/param.o 00:02:13.093 CC lib/iscsi/portal_grp.o 00:02:13.093 CC lib/iscsi/tgt_node.o 00:02:13.093 CC lib/iscsi/iscsi_subsystem.o 00:02:13.093 CC lib/iscsi/task.o 00:02:13.093 CC lib/iscsi/iscsi_rpc.o 00:02:13.660 LIB libspdk_nvmf.a 00:02:13.660 LIB libspdk_vhost.a 00:02:13.919 LIB libspdk_iscsi.a 00:02:14.486 CC module/vfu_device/vfu_virtio.o 00:02:14.486 CC module/vfu_device/vfu_virtio_blk.o 00:02:14.486 CC module/vfu_device/vfu_virtio_scsi.o 00:02:14.486 CC module/vfu_device/vfu_virtio_rpc.o 00:02:14.486 CC module/env_dpdk/env_dpdk_rpc.o 00:02:14.486 CC module/sock/posix/posix.o 00:02:14.486 CC module/scheduler/gscheduler/gscheduler.o 00:02:14.486 CC module/accel/ioat/accel_ioat.o 00:02:14.486 LIB libspdk_env_dpdk_rpc.a 00:02:14.486 CC module/accel/ioat/accel_ioat_rpc.o 00:02:14.486 CC module/accel/error/accel_error.o 00:02:14.486 CC module/accel/error/accel_error_rpc.o 00:02:14.486 CC module/keyring/file/keyring.o 00:02:14.486 CC module/keyring/linux/keyring_rpc.o 00:02:14.486 CC module/keyring/file/keyring_rpc.o 00:02:14.486 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:14.486 CC module/keyring/linux/keyring.o 00:02:14.486 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:14.486 CC module/blob/bdev/blob_bdev.o 00:02:14.486 CC module/accel/dsa/accel_dsa.o 00:02:14.486 CC module/accel/dsa/accel_dsa_rpc.o 00:02:14.486 CC module/accel/iaa/accel_iaa.o 00:02:14.486 CC module/accel/iaa/accel_iaa_rpc.o 00:02:14.486 LIB libspdk_scheduler_gscheduler.a 00:02:14.486 LIB libspdk_keyring_file.a 00:02:14.486 LIB libspdk_keyring_linux.a 00:02:14.486 LIB libspdk_scheduler_dpdk_governor.a 00:02:14.486 LIB libspdk_accel_error.a 00:02:14.486 LIB libspdk_accel_ioat.a 00:02:14.486 LIB libspdk_scheduler_dynamic.a 00:02:14.744 LIB libspdk_accel_iaa.a 00:02:14.745 LIB libspdk_blob_bdev.a 00:02:14.745 LIB libspdk_accel_dsa.a 00:02:14.745 LIB libspdk_vfu_device.a 
00:02:15.003 LIB libspdk_sock_posix.a 00:02:15.003 CC module/bdev/gpt/gpt.o 00:02:15.003 CC module/bdev/gpt/vbdev_gpt.o 00:02:15.003 CC module/blobfs/bdev/blobfs_bdev.o 00:02:15.003 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:15.003 CC module/bdev/lvol/vbdev_lvol.o 00:02:15.003 CC module/bdev/delay/vbdev_delay.o 00:02:15.003 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:15.003 CC module/bdev/raid/bdev_raid.o 00:02:15.003 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:15.003 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:15.003 CC module/bdev/raid/bdev_raid_rpc.o 00:02:15.003 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:15.003 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:15.003 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:15.003 CC module/bdev/malloc/bdev_malloc.o 00:02:15.003 CC module/bdev/raid/bdev_raid_sb.o 00:02:15.003 CC module/bdev/raid/raid1.o 00:02:15.003 CC module/bdev/raid/raid0.o 00:02:15.003 CC module/bdev/raid/concat.o 00:02:15.003 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:15.003 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:15.003 CC module/bdev/passthru/vbdev_passthru.o 00:02:15.003 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:15.003 CC module/bdev/error/vbdev_error.o 00:02:15.003 CC module/bdev/error/vbdev_error_rpc.o 00:02:15.003 CC module/bdev/null/bdev_null_rpc.o 00:02:15.003 CC module/bdev/null/bdev_null.o 00:02:15.003 CC module/bdev/split/vbdev_split.o 00:02:15.003 CC module/bdev/iscsi/bdev_iscsi.o 00:02:15.003 CC module/bdev/split/vbdev_split_rpc.o 00:02:15.003 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:15.003 CC module/bdev/ftl/bdev_ftl.o 00:02:15.003 CC module/bdev/nvme/bdev_nvme.o 00:02:15.003 CC module/bdev/aio/bdev_aio.o 00:02:15.003 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:15.003 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:15.003 CC module/bdev/nvme/vbdev_opal.o 00:02:15.003 CC module/bdev/nvme/bdev_mdns_client.o 00:02:15.003 CC module/bdev/nvme/nvme_rpc.o 00:02:15.003 CC module/bdev/aio/bdev_aio_rpc.o 00:02:15.003 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:15.003 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:15.262 LIB libspdk_blobfs_bdev.a 00:02:15.262 LIB libspdk_bdev_gpt.a 00:02:15.262 LIB libspdk_bdev_null.a 00:02:15.262 LIB libspdk_bdev_error.a 00:02:15.262 LIB libspdk_bdev_passthru.a 00:02:15.262 LIB libspdk_bdev_aio.a 00:02:15.262 LIB libspdk_bdev_zone_block.a 00:02:15.262 LIB libspdk_bdev_iscsi.a 00:02:15.262 LIB libspdk_bdev_ftl.a 00:02:15.262 LIB libspdk_bdev_split.a 00:02:15.262 LIB libspdk_bdev_malloc.a 00:02:15.521 LIB libspdk_bdev_lvol.a 00:02:15.521 LIB libspdk_bdev_delay.a 00:02:15.521 LIB libspdk_bdev_virtio.a 00:02:15.780 LIB libspdk_bdev_raid.a 00:02:16.348 LIB libspdk_bdev_nvme.a 00:02:17.287 CC module/event/subsystems/keyring/keyring.o 00:02:17.287 CC module/event/subsystems/vmd/vmd.o 00:02:17.287 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:17.287 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:17.287 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:17.287 CC module/event/subsystems/scheduler/scheduler.o 00:02:17.287 CC module/event/subsystems/sock/sock.o 00:02:17.287 CC module/event/subsystems/iobuf/iobuf.o 00:02:17.287 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:17.287 LIB libspdk_event_keyring.a 00:02:17.287 LIB libspdk_event_vfu_tgt.a 00:02:17.287 LIB libspdk_event_vmd.a 00:02:17.287 LIB libspdk_event_scheduler.a 00:02:17.287 LIB libspdk_event_sock.a 00:02:17.287 LIB libspdk_event_vhost_blk.a 00:02:17.287 LIB libspdk_event_iobuf.a 00:02:17.546 CC module/event/subsystems/accel/accel.o 
00:02:17.546 LIB libspdk_event_accel.a 00:02:18.116 CC module/event/subsystems/bdev/bdev.o 00:02:18.116 LIB libspdk_event_bdev.a 00:02:18.375 CC module/event/subsystems/scsi/scsi.o 00:02:18.375 CC module/event/subsystems/ublk/ublk.o 00:02:18.375 CC module/event/subsystems/nbd/nbd.o 00:02:18.375 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:18.375 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:18.635 LIB libspdk_event_nbd.a 00:02:18.635 LIB libspdk_event_ublk.a 00:02:18.635 LIB libspdk_event_scsi.a 00:02:18.635 LIB libspdk_event_nvmf.a 00:02:18.894 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:18.894 CC module/event/subsystems/iscsi/iscsi.o 00:02:18.894 LIB libspdk_event_vhost_scsi.a 00:02:18.894 LIB libspdk_event_iscsi.a 00:02:19.468 CC app/spdk_nvme_identify/identify.o 00:02:19.468 CC app/trace_record/trace_record.o 00:02:19.468 CC app/spdk_nvme_discover/discovery_aer.o 00:02:19.468 CXX app/trace/trace.o 00:02:19.468 CC app/spdk_nvme_perf/perf.o 00:02:19.468 CC app/spdk_top/spdk_top.o 00:02:19.468 CC app/spdk_lspci/spdk_lspci.o 00:02:19.468 TEST_HEADER include/spdk/accel_module.h 00:02:19.468 TEST_HEADER include/spdk/accel.h 00:02:19.468 TEST_HEADER include/spdk/barrier.h 00:02:19.468 TEST_HEADER include/spdk/base64.h 00:02:19.468 TEST_HEADER include/spdk/assert.h 00:02:19.468 TEST_HEADER include/spdk/bdev_module.h 00:02:19.468 TEST_HEADER include/spdk/bdev.h 00:02:19.468 TEST_HEADER include/spdk/bdev_zone.h 00:02:19.468 TEST_HEADER include/spdk/bit_pool.h 00:02:19.468 TEST_HEADER include/spdk/bit_array.h 00:02:19.468 TEST_HEADER include/spdk/blob_bdev.h 00:02:19.468 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:19.468 TEST_HEADER include/spdk/blob.h 00:02:19.468 TEST_HEADER include/spdk/blobfs.h 00:02:19.468 TEST_HEADER include/spdk/conf.h 00:02:19.468 TEST_HEADER include/spdk/config.h 00:02:19.468 TEST_HEADER include/spdk/cpuset.h 00:02:19.468 CC app/spdk_dd/spdk_dd.o 00:02:19.468 CC test/rpc_client/rpc_client_test.o 00:02:19.468 TEST_HEADER include/spdk/crc16.h 00:02:19.468 TEST_HEADER include/spdk/crc64.h 00:02:19.468 TEST_HEADER include/spdk/crc32.h 00:02:19.468 TEST_HEADER include/spdk/dif.h 00:02:19.468 TEST_HEADER include/spdk/endian.h 00:02:19.468 TEST_HEADER include/spdk/dma.h 00:02:19.468 TEST_HEADER include/spdk/env_dpdk.h 00:02:19.468 TEST_HEADER include/spdk/env.h 00:02:19.468 TEST_HEADER include/spdk/event.h 00:02:19.468 TEST_HEADER include/spdk/fd_group.h 00:02:19.468 TEST_HEADER include/spdk/fd.h 00:02:19.468 TEST_HEADER include/spdk/file.h 00:02:19.468 TEST_HEADER include/spdk/ftl.h 00:02:19.468 TEST_HEADER include/spdk/gpt_spec.h 00:02:19.468 TEST_HEADER include/spdk/hexlify.h 00:02:19.468 TEST_HEADER include/spdk/histogram_data.h 00:02:19.468 TEST_HEADER include/spdk/idxd.h 00:02:19.468 TEST_HEADER include/spdk/init.h 00:02:19.468 TEST_HEADER include/spdk/idxd_spec.h 00:02:19.468 TEST_HEADER include/spdk/ioat.h 00:02:19.468 TEST_HEADER include/spdk/ioat_spec.h 00:02:19.468 TEST_HEADER include/spdk/iscsi_spec.h 00:02:19.468 TEST_HEADER include/spdk/json.h 00:02:19.468 TEST_HEADER include/spdk/jsonrpc.h 00:02:19.468 TEST_HEADER include/spdk/keyring.h 00:02:19.468 TEST_HEADER include/spdk/keyring_module.h 00:02:19.468 TEST_HEADER include/spdk/likely.h 00:02:19.468 CC app/iscsi_tgt/iscsi_tgt.o 00:02:19.468 TEST_HEADER include/spdk/log.h 00:02:19.468 TEST_HEADER include/spdk/lvol.h 00:02:19.468 TEST_HEADER include/spdk/memory.h 00:02:19.468 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:19.468 TEST_HEADER include/spdk/mmio.h 00:02:19.468 TEST_HEADER 
include/spdk/nbd.h 00:02:19.468 CC app/nvmf_tgt/nvmf_main.o 00:02:19.468 TEST_HEADER include/spdk/notify.h 00:02:19.468 TEST_HEADER include/spdk/nvme.h 00:02:19.468 TEST_HEADER include/spdk/nvme_intel.h 00:02:19.468 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:19.468 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:19.468 TEST_HEADER include/spdk/nvme_spec.h 00:02:19.468 TEST_HEADER include/spdk/nvme_zns.h 00:02:19.468 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:19.468 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:19.468 TEST_HEADER include/spdk/nvmf.h 00:02:19.468 TEST_HEADER include/spdk/nvmf_spec.h 00:02:19.468 TEST_HEADER include/spdk/nvmf_transport.h 00:02:19.468 TEST_HEADER include/spdk/opal.h 00:02:19.468 TEST_HEADER include/spdk/opal_spec.h 00:02:19.468 TEST_HEADER include/spdk/pci_ids.h 00:02:19.468 TEST_HEADER include/spdk/pipe.h 00:02:19.468 TEST_HEADER include/spdk/queue.h 00:02:19.468 TEST_HEADER include/spdk/reduce.h 00:02:19.468 TEST_HEADER include/spdk/rpc.h 00:02:19.468 TEST_HEADER include/spdk/scheduler.h 00:02:19.468 TEST_HEADER include/spdk/scsi.h 00:02:19.468 TEST_HEADER include/spdk/scsi_spec.h 00:02:19.468 TEST_HEADER include/spdk/sock.h 00:02:19.468 TEST_HEADER include/spdk/stdinc.h 00:02:19.468 TEST_HEADER include/spdk/string.h 00:02:19.468 TEST_HEADER include/spdk/thread.h 00:02:19.468 TEST_HEADER include/spdk/trace.h 00:02:19.468 TEST_HEADER include/spdk/trace_parser.h 00:02:19.468 CC app/spdk_tgt/spdk_tgt.o 00:02:19.468 TEST_HEADER include/spdk/tree.h 00:02:19.468 TEST_HEADER include/spdk/ublk.h 00:02:19.468 TEST_HEADER include/spdk/util.h 00:02:19.468 TEST_HEADER include/spdk/version.h 00:02:19.468 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:19.468 TEST_HEADER include/spdk/uuid.h 00:02:19.468 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:19.468 TEST_HEADER include/spdk/vhost.h 00:02:19.468 TEST_HEADER include/spdk/vmd.h 00:02:19.468 TEST_HEADER include/spdk/xor.h 00:02:19.468 TEST_HEADER include/spdk/zipf.h 00:02:19.468 CXX test/cpp_headers/accel.o 00:02:19.468 CXX test/cpp_headers/accel_module.o 00:02:19.468 CXX test/cpp_headers/assert.o 00:02:19.468 CXX test/cpp_headers/barrier.o 00:02:19.468 CXX test/cpp_headers/base64.o 00:02:19.468 CXX test/cpp_headers/bdev.o 00:02:19.468 CXX test/cpp_headers/bdev_module.o 00:02:19.468 CXX test/cpp_headers/bdev_zone.o 00:02:19.468 CXX test/cpp_headers/bit_array.o 00:02:19.468 CXX test/cpp_headers/blob_bdev.o 00:02:19.468 CXX test/cpp_headers/bit_pool.o 00:02:19.468 CXX test/cpp_headers/blobfs_bdev.o 00:02:19.468 CXX test/cpp_headers/blobfs.o 00:02:19.468 CXX test/cpp_headers/blob.o 00:02:19.468 CXX test/cpp_headers/conf.o 00:02:19.468 CXX test/cpp_headers/config.o 00:02:19.468 CXX test/cpp_headers/cpuset.o 00:02:19.468 CXX test/cpp_headers/crc16.o 00:02:19.468 CXX test/cpp_headers/crc32.o 00:02:19.468 CXX test/cpp_headers/crc64.o 00:02:19.468 CXX test/cpp_headers/dma.o 00:02:19.468 CXX test/cpp_headers/dif.o 00:02:19.469 CXX test/cpp_headers/endian.o 00:02:19.469 CXX test/cpp_headers/env_dpdk.o 00:02:19.469 CXX test/cpp_headers/env.o 00:02:19.469 CXX test/cpp_headers/event.o 00:02:19.469 CXX test/cpp_headers/fd_group.o 00:02:19.469 CXX test/cpp_headers/fd.o 00:02:19.469 CXX test/cpp_headers/file.o 00:02:19.469 CXX test/cpp_headers/ftl.o 00:02:19.469 CXX test/cpp_headers/gpt_spec.o 00:02:19.469 CXX test/cpp_headers/hexlify.o 00:02:19.469 CXX test/cpp_headers/histogram_data.o 00:02:19.469 CC examples/util/zipf/zipf.o 00:02:19.469 CXX test/cpp_headers/idxd.o 00:02:19.469 CXX test/cpp_headers/idxd_spec.o 00:02:19.469 CXX 
test/cpp_headers/init.o 00:02:19.469 CXX test/cpp_headers/ioat.o 00:02:19.469 CC test/env/vtophys/vtophys.o 00:02:19.469 CC test/thread/poller_perf/poller_perf.o 00:02:19.469 CXX test/cpp_headers/ioat_spec.o 00:02:19.469 CXX test/cpp_headers/iscsi_spec.o 00:02:19.469 CXX test/cpp_headers/json.o 00:02:19.469 CXX test/cpp_headers/jsonrpc.o 00:02:19.469 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:19.469 CC test/thread/lock/spdk_lock.o 00:02:19.469 CC test/env/memory/memory_ut.o 00:02:19.469 CC test/app/histogram_perf/histogram_perf.o 00:02:19.469 CC test/env/pci/pci_ut.o 00:02:19.469 CC examples/ioat/verify/verify.o 00:02:19.469 CC test/app/jsoncat/jsoncat.o 00:02:19.469 CC app/fio/nvme/fio_plugin.o 00:02:19.469 CC examples/ioat/perf/perf.o 00:02:19.469 CC test/app/stub/stub.o 00:02:19.469 LINK spdk_lspci 00:02:19.469 CC test/dma/test_dma/test_dma.o 00:02:19.469 CC app/fio/bdev/fio_plugin.o 00:02:19.469 CC test/app/bdev_svc/bdev_svc.o 00:02:19.469 CC test/env/mem_callbacks/mem_callbacks.o 00:02:19.469 LINK spdk_nvme_discover 00:02:19.469 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:19.469 LINK rpc_client_test 00:02:19.728 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:19.728 LINK spdk_trace_record 00:02:19.728 LINK poller_perf 00:02:19.728 LINK vtophys 00:02:19.728 LINK nvmf_tgt 00:02:19.728 LINK interrupt_tgt 00:02:19.728 CXX test/cpp_headers/keyring.o 00:02:19.728 LINK zipf 00:02:19.728 LINK jsoncat 00:02:19.728 CXX test/cpp_headers/keyring_module.o 00:02:19.728 CXX test/cpp_headers/likely.o 00:02:19.728 CXX test/cpp_headers/log.o 00:02:19.728 CXX test/cpp_headers/lvol.o 00:02:19.728 CXX test/cpp_headers/memory.o 00:02:19.728 CXX test/cpp_headers/mmio.o 00:02:19.728 CXX test/cpp_headers/nbd.o 00:02:19.728 CXX test/cpp_headers/notify.o 00:02:19.728 LINK histogram_perf 00:02:19.728 CXX test/cpp_headers/nvme.o 00:02:19.728 CXX test/cpp_headers/nvme_intel.o 00:02:19.728 CXX test/cpp_headers/nvme_ocssd.o 00:02:19.728 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:19.728 CXX test/cpp_headers/nvme_spec.o 00:02:19.728 CXX test/cpp_headers/nvme_zns.o 00:02:19.728 CXX test/cpp_headers/nvmf_cmd.o 00:02:19.728 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:19.728 CXX test/cpp_headers/nvmf.o 00:02:19.728 CXX test/cpp_headers/nvmf_spec.o 00:02:19.728 CXX test/cpp_headers/nvmf_transport.o 00:02:19.728 LINK env_dpdk_post_init 00:02:19.728 CXX test/cpp_headers/opal.o 00:02:19.728 CXX test/cpp_headers/opal_spec.o 00:02:19.729 CXX test/cpp_headers/pci_ids.o 00:02:19.729 CXX test/cpp_headers/pipe.o 00:02:19.729 CXX test/cpp_headers/queue.o 00:02:19.729 CXX test/cpp_headers/reduce.o 00:02:19.729 CXX test/cpp_headers/rpc.o 00:02:19.729 CXX test/cpp_headers/scheduler.o 00:02:19.729 CXX test/cpp_headers/scsi.o 00:02:19.729 CXX test/cpp_headers/scsi_spec.o 00:02:19.729 CXX test/cpp_headers/sock.o 00:02:19.729 CXX test/cpp_headers/stdinc.o 00:02:19.729 LINK iscsi_tgt 00:02:19.729 CXX test/cpp_headers/string.o 00:02:19.729 CXX test/cpp_headers/thread.o 00:02:19.729 CXX test/cpp_headers/trace.o 00:02:19.729 LINK stub 00:02:19.729 CXX test/cpp_headers/trace_parser.o 00:02:19.729 LINK spdk_tgt 00:02:19.729 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:19.729 struct spdk_nvme_fdp_ruhs ruhs; 00:02:19.729 ^ 00:02:19.729 CXX test/cpp_headers/tree.o 00:02:19.729 CXX test/cpp_headers/ublk.o 00:02:19.729 CXX test/cpp_headers/util.o 00:02:19.729 LINK ioat_perf 00:02:19.729 
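The fio_plugin.c warning interleaved above fires because a struct ending in a flexible array member is embedded as a non-final field of another struct, which clang accepts only as a GNU extension (an ExtWarn, on by default, hence "1 warning generated." and the build continuing). A hedged, self-contained reproduction of that warning class; the struct names below are illustrative stand-ins, not the real SPDK definitions:

    cat > /tmp/fam_field.c <<'EOF'
    struct ruhs_like {                /* ends in a flexible array member */
        unsigned int count;
        unsigned int ids[];
    };
    struct wrapper {
        struct ruhs_like ruhs;        /* variable-sized field not at the end */
        int trailing_member;          /* -> clang flags the GNU extension */
    };
    EOF
    clang -c /tmp/fam_field.c -o /tmp/fam_field.o
    # expected: warning: field 'ruhs' with variable sized type 'struct ruhs_like' not at
    # the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end]

The usual fix in real code is to move the variable-sized field to the end of the enclosing struct or to allocate it separately; here it is warning-only, so linking proceeds.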
LINK verify 00:02:19.729 LINK bdev_svc 00:02:19.729 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:19.729 CXX test/cpp_headers/uuid.o 00:02:19.987 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:19.987 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:19.987 CXX test/cpp_headers/version.o 00:02:19.987 LINK spdk_trace 00:02:19.987 CXX test/cpp_headers/vfio_user_pci.o 00:02:19.987 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:19.987 CXX test/cpp_headers/vfio_user_spec.o 00:02:19.987 CXX test/cpp_headers/vhost.o 00:02:19.987 CXX test/cpp_headers/vmd.o 00:02:19.987 CXX test/cpp_headers/xor.o 00:02:19.987 CXX test/cpp_headers/zipf.o 00:02:19.987 LINK test_dma 00:02:19.987 LINK pci_ut 00:02:19.987 LINK spdk_dd 00:02:19.987 LINK nvme_fuzz 00:02:20.246 1 warning generated. 00:02:20.246 LINK spdk_nvme_identify 00:02:20.246 LINK mem_callbacks 00:02:20.246 LINK spdk_bdev 00:02:20.246 LINK spdk_nvme 00:02:20.246 LINK vhost_fuzz 00:02:20.246 LINK llvm_vfio_fuzz 00:02:20.246 LINK spdk_nvme_perf 00:02:20.246 CC examples/sock/hello_world/hello_sock.o 00:02:20.246 CC examples/vmd/lsvmd/lsvmd.o 00:02:20.246 CC examples/vmd/led/led.o 00:02:20.246 CC examples/idxd/perf/perf.o 00:02:20.505 CC app/vhost/vhost.o 00:02:20.505 CC examples/thread/thread/thread_ex.o 00:02:20.505 LINK spdk_top 00:02:20.506 LINK lsvmd 00:02:20.506 LINK led 00:02:20.506 LINK llvm_nvme_fuzz 00:02:20.506 LINK memory_ut 00:02:20.506 LINK idxd_perf 00:02:20.506 LINK vhost 00:02:20.506 LINK hello_sock 00:02:20.764 LINK thread 00:02:20.764 LINK spdk_lock 00:02:21.022 LINK iscsi_fuzz 00:02:21.282 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:21.282 CC examples/nvme/reconnect/reconnect.o 00:02:21.282 CC examples/nvme/hello_world/hello_world.o 00:02:21.282 CC examples/nvme/hotplug/hotplug.o 00:02:21.282 CC examples/nvme/arbitration/arbitration.o 00:02:21.282 CC examples/nvme/abort/abort.o 00:02:21.282 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:21.282 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:21.282 CC test/event/event_perf/event_perf.o 00:02:21.540 CC test/event/reactor/reactor.o 00:02:21.540 CC test/event/reactor_perf/reactor_perf.o 00:02:21.540 CC test/event/app_repeat/app_repeat.o 00:02:21.540 CC test/event/scheduler/scheduler.o 00:02:21.540 LINK cmb_copy 00:02:21.540 LINK pmr_persistence 00:02:21.540 LINK hello_world 00:02:21.540 LINK hotplug 00:02:21.540 LINK reactor 00:02:21.540 LINK event_perf 00:02:21.540 LINK reactor_perf 00:02:21.540 LINK app_repeat 00:02:21.540 LINK reconnect 00:02:21.540 LINK arbitration 00:02:21.540 LINK abort 00:02:21.799 LINK nvme_manage 00:02:21.800 LINK scheduler 00:02:21.800 CC test/nvme/compliance/nvme_compliance.o 00:02:21.800 CC test/nvme/overhead/overhead.o 00:02:21.800 CC test/nvme/startup/startup.o 00:02:21.800 CC test/nvme/reset/reset.o 00:02:21.800 CC test/nvme/simple_copy/simple_copy.o 00:02:21.800 CC test/accel/dif/dif.o 00:02:21.800 CC test/nvme/sgl/sgl.o 00:02:21.800 CC test/nvme/e2edp/nvme_dp.o 00:02:21.800 CC test/nvme/fused_ordering/fused_ordering.o 00:02:21.800 CC test/nvme/err_injection/err_injection.o 00:02:21.800 CC test/nvme/aer/aer.o 00:02:21.800 CC test/nvme/connect_stress/connect_stress.o 00:02:21.800 CC test/nvme/reserve/reserve.o 00:02:21.800 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:21.800 CC test/nvme/boot_partition/boot_partition.o 00:02:21.800 CC test/nvme/fdp/fdp.o 00:02:22.057 CC test/blobfs/mkfs/mkfs.o 00:02:22.057 CC test/nvme/cuse/cuse.o 00:02:22.057 CC test/lvol/esnap/esnap.o 00:02:22.057 LINK startup 00:02:22.057 LINK boot_partition 
00:02:22.057 LINK connect_stress 00:02:22.057 LINK err_injection 00:02:22.057 LINK reserve 00:02:22.057 LINK fused_ordering 00:02:22.057 LINK mkfs 00:02:22.057 LINK simple_copy 00:02:22.057 LINK reset 00:02:22.057 LINK aer 00:02:22.057 LINK nvme_dp 00:02:22.057 LINK overhead 00:02:22.057 LINK sgl 00:02:22.057 LINK fdp 00:02:22.057 LINK doorbell_aers 00:02:22.314 LINK dif 00:02:22.314 LINK nvme_compliance 00:02:22.572 CC examples/accel/perf/accel_perf.o 00:02:22.572 CC examples/blob/hello_world/hello_blob.o 00:02:22.572 CC examples/blob/cli/blobcli.o 00:02:22.830 LINK hello_blob 00:02:22.830 LINK cuse 00:02:22.830 LINK accel_perf 00:02:23.088 LINK blobcli 00:02:23.664 CC examples/bdev/hello_world/hello_bdev.o 00:02:23.664 CC examples/bdev/bdevperf/bdevperf.o 00:02:24.017 LINK hello_bdev 00:02:24.018 CC test/bdev/bdevio/bdevio.o 00:02:24.276 LINK bdevperf 00:02:24.276 LINK bdevio 00:02:25.653 LINK esnap 00:02:25.913 CC examples/nvmf/nvmf/nvmf.o 00:02:26.172 LINK nvmf 00:02:27.552 00:02:27.552 real 0m48.734s 00:02:27.552 user 6m14.281s 00:02:27.552 sys 2m28.963s 00:02:27.552 18:57:07 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:27.552 18:57:07 make -- common/autotest_common.sh@10 -- $ set +x 00:02:27.552 ************************************ 00:02:27.552 END TEST make 00:02:27.552 ************************************ 00:02:27.552 18:57:07 -- common/autotest_common.sh@1142 -- $ return 0 00:02:27.552 18:57:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:27.552 18:57:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:27.552 18:57:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:27.552 18:57:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.552 18:57:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:27.552 18:57:07 -- pm/common@44 -- $ pid=550765 00:02:27.552 18:57:07 -- pm/common@50 -- $ kill -TERM 550765 00:02:27.552 18:57:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.552 18:57:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:27.552 18:57:07 -- pm/common@44 -- $ pid=550767 00:02:27.552 18:57:07 -- pm/common@50 -- $ kill -TERM 550767 00:02:27.552 18:57:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.552 18:57:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:27.552 18:57:07 -- pm/common@44 -- $ pid=550769 00:02:27.552 18:57:07 -- pm/common@50 -- $ kill -TERM 550769 00:02:27.552 18:57:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.552 18:57:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:27.552 18:57:07 -- pm/common@44 -- $ pid=550792 00:02:27.552 18:57:07 -- pm/common@50 -- $ sudo -E kill -TERM 550792 00:02:27.552 18:57:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:27.552 18:57:07 -- nvmf/common.sh@7 -- # uname -s 00:02:27.552 18:57:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:27.552 18:57:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:27.552 18:57:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:27.552 18:57:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:27.552 18:57:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:02:27.552 18:57:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:27.552 18:57:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:27.552 18:57:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:27.552 18:57:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:27.552 18:57:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:27.552 18:57:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:02:27.552 18:57:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:02:27.552 18:57:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:27.552 18:57:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:27.552 18:57:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:27.552 18:57:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:27.552 18:57:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:27.552 18:57:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:27.552 18:57:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:27.552 18:57:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:27.552 18:57:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.552 18:57:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.552 18:57:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.552 18:57:07 -- paths/export.sh@5 -- # export PATH 00:02:27.552 18:57:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.552 18:57:07 -- nvmf/common.sh@47 -- # : 0 00:02:27.552 18:57:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:27.552 18:57:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:27.552 18:57:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:27.552 18:57:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:27.552 18:57:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:27.552 18:57:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:27.552 18:57:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:27.552 18:57:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:27.552 18:57:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:27.552 18:57:07 -- spdk/autotest.sh@32 -- # uname -s 00:02:27.552 18:57:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:27.552 18:57:07 -- spdk/autotest.sh@33 -- # 
old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:27.552 18:57:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:27.552 18:57:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:27.552 18:57:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:27.552 18:57:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:27.552 18:57:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:27.552 18:57:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:27.552 18:57:07 -- spdk/autotest.sh@48 -- # udevadm_pid=610042 00:02:27.552 18:57:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:27.552 18:57:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:27.552 18:57:07 -- pm/common@17 -- # local monitor 00:02:27.552 18:57:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.552 18:57:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.552 18:57:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.553 18:57:07 -- pm/common@21 -- # date +%s 00:02:27.553 18:57:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.553 18:57:07 -- pm/common@21 -- # date +%s 00:02:27.553 18:57:07 -- pm/common@25 -- # sleep 1 00:02:27.553 18:57:07 -- pm/common@21 -- # date +%s 00:02:27.553 18:57:07 -- pm/common@21 -- # date +%s 00:02:27.553 18:57:07 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721062627 00:02:27.553 18:57:07 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721062627 00:02:27.553 18:57:07 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721062627 00:02:27.553 18:57:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721062627 00:02:27.812 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721062627_collect-vmstat.pm.log 00:02:27.812 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721062627_collect-cpu-load.pm.log 00:02:27.812 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721062627_collect-cpu-temp.pm.log 00:02:27.812 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721062627_collect-bmc-pm.bmc.pm.log 00:02:28.750 18:57:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:28.750 18:57:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:28.750 18:57:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:28.750 18:57:08 -- common/autotest_common.sh@10 -- # set +x 00:02:28.750 18:57:08 -- spdk/autotest.sh@59 -- # create_test_list 00:02:28.750 18:57:08 -- 
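Earlier in this stretch autotest.sh saves the existing kernel core_pattern (the systemd-coredump pipe) and points it at SPDK's core-collector.sh instead, so any crash during the run is captured under the output coredumps directory. A minimal sketch of that mechanism, assuming the echo targets /proc/sys/kernel/core_pattern (the trace does not show the redirection) and that it runs as root; $output_dir and the script path are illustrative:

    # pipe future core dumps to a collector script; %P, %s, %t are the crashing
    # PID, the signal number, and the dump time respectively (see core(5))
    old_core_pattern=$(< /proc/sys/kernel/core_pattern)
    mkdir -p "$output_dir/coredumps"
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # ... run the tests ...
    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern   # restore afterwards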
common/autotest_common.sh@746 -- # xtrace_disable 00:02:28.750 18:57:08 -- common/autotest_common.sh@10 -- # set +x 00:02:28.750 18:57:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:28.750 18:57:09 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:28.750 18:57:09 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:28.750 18:57:09 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:28.750 18:57:09 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:28.750 18:57:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:28.750 18:57:09 -- common/autotest_common.sh@1455 -- # uname 00:02:28.750 18:57:09 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:28.750 18:57:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:28.750 18:57:09 -- common/autotest_common.sh@1475 -- # uname 00:02:28.750 18:57:09 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:28.750 18:57:09 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:28.750 18:57:09 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:02:28.750 18:57:09 -- spdk/autotest.sh@72 -- # hash lcov 00:02:28.750 18:57:09 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:28.750 18:57:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:28.750 18:57:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:28.750 18:57:09 -- common/autotest_common.sh@10 -- # set +x 00:02:28.750 18:57:09 -- spdk/autotest.sh@91 -- # rm -f 00:02:28.750 18:57:09 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:32.946 0000:5e:00.0 (144d a80a): Already using the nvme driver 00:02:32.946 0000:af:00.0 (8086 2701): Already using the nvme driver 00:02:32.946 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:b0:00.0 (8086 2701): Already using the nvme driver 00:02:32.946 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:32.946 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:32.946 18:57:13 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:32.946 18:57:13 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:32.946 18:57:13 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:32.946 18:57:13 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:32.946 18:57:13 -- common/autotest_common.sh@1672 -- # for nvme in 
/sys/block/nvme* 00:02:32.946 18:57:13 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:32.946 18:57:13 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:32.946 18:57:13 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:32.946 18:57:13 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:32.946 18:57:13 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:32.946 18:57:13 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:02:32.946 18:57:13 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:02:32.946 18:57:13 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:32.946 18:57:13 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:32.946 18:57:13 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:32.946 18:57:13 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:02:32.946 18:57:13 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:02:32.946 18:57:13 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:32.946 18:57:13 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:32.946 18:57:13 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:32.946 18:57:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:32.946 18:57:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:32.946 18:57:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:32.946 18:57:13 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:32.946 18:57:13 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:32.946 No valid GPT data, bailing 00:02:32.946 18:57:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:32.946 18:57:13 -- scripts/common.sh@391 -- # pt= 00:02:32.946 18:57:13 -- scripts/common.sh@392 -- # return 1 00:02:32.946 18:57:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:32.946 1+0 records in 00:02:32.946 1+0 records out 00:02:32.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00198622 s, 528 MB/s 00:02:32.946 18:57:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:32.946 18:57:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:32.946 18:57:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:02:32.946 18:57:13 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:02:32.946 18:57:13 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:32.946 No valid GPT data, bailing 00:02:32.946 18:57:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:32.946 18:57:13 -- scripts/common.sh@391 -- # pt= 00:02:32.946 18:57:13 -- scripts/common.sh@392 -- # return 1 00:02:32.946 18:57:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:33.206 1+0 records in 00:02:33.206 1+0 records out 00:02:33.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0032041 s, 327 MB/s 00:02:33.206 18:57:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:33.206 18:57:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:33.206 18:57:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:02:33.206 18:57:13 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:02:33.206 18:57:13 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py 
/dev/nvme2n1 00:02:33.206 No valid GPT data, bailing 00:02:33.206 18:57:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:02:33.206 18:57:13 -- scripts/common.sh@391 -- # pt= 00:02:33.206 18:57:13 -- scripts/common.sh@392 -- # return 1 00:02:33.206 18:57:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:02:33.206 1+0 records in 00:02:33.206 1+0 records out 00:02:33.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00161932 s, 648 MB/s 00:02:33.206 18:57:13 -- spdk/autotest.sh@118 -- # sync 00:02:33.206 18:57:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:33.206 18:57:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:33.206 18:57:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:38.480 18:57:18 -- spdk/autotest.sh@124 -- # uname -s 00:02:38.480 18:57:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:38.480 18:57:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:38.480 18:57:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:38.480 18:57:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.480 18:57:18 -- common/autotest_common.sh@10 -- # set +x 00:02:38.480 ************************************ 00:02:38.480 START TEST setup.sh 00:02:38.480 ************************************ 00:02:38.480 18:57:18 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:38.480 * Looking for test storage... 00:02:38.480 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:38.480 18:57:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:38.480 18:57:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:38.480 18:57:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:38.480 18:57:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:38.480 18:57:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.480 18:57:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:38.480 ************************************ 00:02:38.480 START TEST acl 00:02:38.480 ************************************ 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:38.480 * Looking for test storage... 
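The pre-cleanup pass traced above follows one pattern per /dev/nvme*n1: rule out zoned namespaces via sysfs, probe for an existing partition table (spdk-gpt.py, then blkid -s PTTYPE), and scrub the first MiB with dd only when no table is found. Below is a minimal standalone sketch of that probe-then-wipe flow, assuming stock Linux blkid/dd and using blkid alone in place of the spdk-gpt.py check; the device glob and messages are illustrative, not the autotest code itself:

    #!/usr/bin/env bash
    set -euo pipefail

    # Mirrors the sysfs check in the trace: a namespace is zoned when
    # /sys/block/<dev>/queue/zoned exists and reads something other than "none".
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(cat "/sys/block/$device/queue/zoned") != none ]]
    }

    for dev in /dev/nvme*n1; do
        name=${dev#/dev/}
        if is_block_zoned "$name"; then
            echo "skipping zoned namespace $dev"
            continue
        fi
        # blkid prints the partition-table type (gpt, dos, ...) or nothing.
        # An empty result corresponds to the "No valid GPT data, bailing"
        # case in the log, after which the test claims the disk by zeroing
        # its first MiB.
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done

Zeroing a single MiB is enough here because the MBR and the primary GPT header live at the head of the disk, and the dd statistics the log records double as a quick write-access check on the device.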
00:02:38.480 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:38.480 18:57:18 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:38.480 18:57:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:38.480 18:57:18 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:38.480 18:57:18 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:38.480 18:57:18 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:38.480 18:57:18 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:38.480 18:57:18 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:38.480 18:57:18 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.480 18:57:18 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.689 18:57:23 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:42.689 18:57:23 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:42.689 18:57:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.689 18:57:23 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:42.689 18:57:23 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.689 18:57:23 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:46.884 Hugepages 00:02:46.884 node hugesize free / total 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:46.884 18:57:26 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.884 00:02:46.884 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.884 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:46.885 18:57:26 setup.sh.acl -- 
setup/acl.sh@22 -- # devs+=("$dev") 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.885 18:57:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:af:00.0 == *:*:*.* ]] 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\a\f\:\0\0\.\0* ]] 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:b0:00.0 == *:*:*.* ]] 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == 
*\0\0\0\0\:\b\0\:\0\0\.\0* ]] 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@24 -- # (( 3 > 0 )) 00:02:46.885 18:57:27 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:46.885 18:57:27 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:46.885 18:57:27 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:46.885 18:57:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:46.885 ************************************ 00:02:46.885 START TEST denied 00:02:46.885 ************************************ 00:02:46.885 18:57:27 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:46.885 18:57:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:46.885 18:57:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:46.885 18:57:27 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:46.885 18:57:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.885 18:57:27 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:51.080 0000:5e:00.0 (144d a80a): Skipping denied controller at 0000:5e:00.0 00:02:51.080 18:57:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:51.080 18:57:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:51.080 18:57:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:51.080 18:57:31 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:51.080 18:57:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:51.080 18:57:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:51.080 18:57:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:51.080 18:57:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:51.080 18:57:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:51.080 18:57:31 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:56.369 00:02:56.369 real 0m9.227s 00:02:56.369 user 0m2.901s 00:02:56.369 sys 0m5.547s 00:02:56.369 18:57:36 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:56.369 18:57:36 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:56.369 ************************************ 00:02:56.369 END TEST denied 00:02:56.369 ************************************ 00:02:56.369 18:57:36 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:56.369 18:57:36 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:56.369 18:57:36 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:56.369 18:57:36 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.369 18:57:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:56.369 ************************************ 00:02:56.369 START TEST allowed 00:02:56.369 ************************************ 00:02:56.369 18:57:36 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 
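Both acl subtests steer scripts/setup.sh purely through environment variables: the denied test above sets PCI_BLOCKED so the controller is skipped, and the allowed test opening here sets PCI_ALLOWED so only the named BDFs may be rebound. A hedged usage sketch with the BDF from this trace, run as root from the SPDK checkout; the comments describe the behaviour visible in the surrounding log lines:

    # Deny-list: 0000:5e:00.0 stays on its kernel driver; setup.sh reports
    # "Skipping denied controller at 0000:5e:00.0" as in the denied test above.
    PCI_BLOCKED=' 0000:5e:00.0' ./scripts/setup.sh config

    # Allow-list: only 0000:5e:00.0 is eligible for rebinding
    # (nvme -> vfio-pci in the allowed test that follows); all other
    # controllers are left untouched.
    PCI_ALLOWED='0000:5e:00.0' ./scripts/setup.sh config

    # Hand every device back to its kernel driver afterwards.
    ./scripts/setup.sh reset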
00:02:56.369 18:57:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:56.369 18:57:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:56.369 18:57:36 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:56.369 18:57:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.369 18:57:36 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:01.644 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:af:00.0 0000:b0:00.0 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:af:00.0 ]] 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/driver 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:b0:00.0 ]] 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:b0:00.0/driver 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.644 18:57:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.924 00:03:06.924 real 0m9.848s 00:03:06.924 user 0m2.800s 00:03:06.924 sys 0m5.461s 00:03:06.924 18:57:46 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.924 18:57:46 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:06.924 ************************************ 00:03:06.924 END TEST allowed 00:03:06.924 ************************************ 00:03:06.924 18:57:46 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:06.924 00:03:06.924 real 0m27.649s 00:03:06.924 user 0m8.752s 00:03:06.924 sys 0m16.849s 00:03:06.924 18:57:46 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.924 18:57:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:06.924 ************************************ 00:03:06.924 END TEST acl 00:03:06.924 ************************************ 00:03:06.924 18:57:46 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:06.924 18:57:46 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.924 18:57:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.924 18:57:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.924 18:57:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:06.924 ************************************ 00:03:06.924 START TEST hugepages 00:03:06.924 
************************************ 00:03:06.924 18:57:46 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.924 * Looking for test storage... 00:03:06.924 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 37889540 kB' 'MemAvailable: 41476352 kB' 'Buffers: 2704 kB' 'Cached: 15236344 kB' 'SwapCached: 0 kB' 'Active: 12390508 kB' 'Inactive: 3465204 kB' 'Active(anon): 11878468 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619940 kB' 'Mapped: 176112 kB' 'Shmem: 11261804 kB' 'KReclaimable: 213020 kB' 'Slab: 643452 kB' 'SReclaimable: 213020 kB' 'SUnreclaim: 430432 kB' 'KernelStack: 16496 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439180 kB' 'Committed_AS: 13253732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203588 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB' 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[the same compare-and-continue xtrace repeats for each remaining /proc/meminfo field, MemAvailable through HugePages_Rsvd]
00:03:06.924 18:57:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[
HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:06.925 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:06.926 18:57:46 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test 
default_setup default_setup 00:03:06.926 18:57:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.926 18:57:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.926 18:57:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:06.926 ************************************ 00:03:06.926 START TEST default_setup 00:03:06.926 ************************************ 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.926 18:57:46 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:10.216 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:03:10.216 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:03:10.216 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 
0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:03:10.216 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.216 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.217 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.217 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.217 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.217 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.217 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.217 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40091732 kB' 'MemAvailable: 43678048 kB' 'Buffers: 2704 kB' 'Cached: 15236444 kB' 'SwapCached: 0 kB' 'Active: 12408072 kB' 'Inactive: 3465204 kB' 'Active(anon): 11896032 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637016 kB' 'Mapped: 176672 kB' 'Shmem: 11261904 kB' 'KReclaimable: 212032 kB' 'Slab: 641512 kB' 'SReclaimable: 212032 kB' 'SUnreclaim: 429480 kB' 'KernelStack: 16704 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13275388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203668 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB' 00:03:10.217
[xtrace 00:03:10.217-00:03:10.479 condensed: setup/common.sh@31-32 repeats "IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" once per /proc/meminfo field, from MemTotal through HardwareCorrupted; the per-field skip traces are identical apart from the field name]
00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:10.479 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40098484 kB' 'MemAvailable: 43684768 kB' 'Buffers: 2704 kB' 'Cached: 15236444 kB' 'SwapCached: 0 kB' 'Active: 12407580 kB' 'Inactive: 3465204 kB' 'Active(anon): 11895540 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637044 kB' 'Mapped: 176552 kB' 'Shmem: 11261904 kB' 'KReclaimable: 211968 kB' 'Slab: 641440 kB' 'SReclaimable: 211968 kB' 'SUnreclaim: 429472 kB' 'KernelStack: 16800 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13275404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203636 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
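For reference, here is a minimal standalone sketch of the get_meminfo helper whose xtrace appears throughout this test, reconstructed purely from the traced commands (setup/common.sh@16-33). The real SPDK setup/common.sh may differ in detail; treat the argument handling as an assumption.

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern used below

    # get_meminfo FIELD [NODE] - print the value of one meminfo field.
    # With a NODE argument it reads the per-node sysfs file, whose lines
    # carry a "Node N " prefix; stripping that prefix lets the same parser
    # handle both files.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")

        # IFS=': ' splits "HugePages_Surp:    0" into var=HugePages_Surp, val=0.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on the machine traced here

This is why every lookup in the log re-dumps the full meminfo snapshot and then emits one skip trace per field until the requested key matches.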
[xtrace 00:03:10.479-00:03:10.480 condensed: the same setup/common.sh@31-32 read loop now scans for \H\u\g\e\P\a\g\e\s\_\S\u\r\p, skipping every field from MemTotal through HugePages_Rsvd with identical "continue" traces]
00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40097976 kB' 'MemAvailable: 43684260 kB' 'Buffers: 2704 kB' 'Cached: 15236464 kB' 'SwapCached: 0 kB' 'Active: 12407480 kB' 'Inactive: 3465204 kB' 'Active(anon): 11895440 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636816 kB' 'Mapped: 176560 kB' 'Shmem: 11261924 kB' 'KReclaimable: 211968 kB' 'Slab: 641440 kB' 'SReclaimable: 211968 kB' 'SUnreclaim: 429472 kB' 'KernelStack: 16800 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13274812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203652 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
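Likewise, a sketch of the hugepages.sh flow this trace is stepping through, reconstructed from the traced lines (hugepages.sh@49-73 sized the pool at the top of the test; @89-100 is the verification in progress here). It reuses the get_meminfo sketch above, and the final comparison is an assumption about the check that follows this excerpt:

    # 2097152 kB requested on node 0 with 2048 kB pages -> nr_hugepages=1024.
    declare -g nr_hugepages
    declare -ga nodes_test
    default_hugepages=2048   # kB; matches "Hugepagesize: 2048 kB" above

    get_test_nr_hugepages() {               # hugepages.sh@49-73 in the trace
        local size=$1; shift                # requested size in kB
        local node_ids=("$@") node          # e.g. ('0')
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages             # nodes_test[0]=1024
        done
    }

    verify_nr_hugepages() {                 # hugepages.sh@89-100 in the trace
        local anon surp resv total
        anon=$(get_meminfo AnonHugePages)   # 0 above (hugepages.sh@97)
        surp=$(get_meminfo HugePages_Surp)  # 0 above (hugepages.sh@99)
        resv=$(get_meminfo HugePages_Rsvd)  # the lookup this excerpt ends inside
        total=$(get_meminfo HugePages_Total)
        # Assumed check: the kernel must expose exactly the requested pages,
        # with no surplus or reserved pages skewing the count.
        (( total == nr_hugepages && surp == 0 && resv == 0 ))
    }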
-- # continue 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.480 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace elided for the remaining non-matching /proc/meminfo fields: CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free ...]
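The scan being traced here is setup/common.sh's get_meminfo helper: it slurps the meminfo file, strips any "Node N" prefix, then walks it with IFS=': ', skipping field after field until the requested key matches, and echoes that field's value. A minimal sketch, reconstructed from the xtrace above (the in-tree helper may differ in detail):

  # Reconstructed sketch of the traced field scan; not the verbatim
  # setup/common.sh source.
  get_meminfo_sketch() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    shopt -s extglob
    # Per-node queries read the node-local meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }") # node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] || continue # the long compare/continue run in the trace
      echo "$val"
      return 0
    done
    return 1
  }

Called as get_meminfo_sketch HugePages_Rsvd, it would print 0 on this box, matching the echo 0 / return 0 that follows.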
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:10.481 nr_hugepages=1024
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:10.481 resv_hugepages=0
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:10.481 surplus_hugepages=0
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:10.481 anon_hugepages=0
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.481 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40097036 kB' 'MemAvailable: 43683320 kB' 'Buffers: 2704 kB' 'Cached: 15236484 kB' 'SwapCached: 0 kB' 'Active: 12410200 kB' 'Inactive: 3465204 kB' 'Active(anon): 11898160 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639516 kB' 'Mapped: 177056 kB' 'Shmem: 11261944 kB' 'KReclaimable: 211968 kB' 'Slab: 641440 kB' 'SReclaimable: 211968 kB' 'SUnreclaim: 429472 kB' 'KernelStack: 16688 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13278932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203668 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
[... IFS/read/compare/continue trace elided while the loop scans MemTotal through Unaccepted for HugePages_Total ...]
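The resv/surplus/anon values echoed above feed a simple accounting identity checked at hugepages.sh@107 and @109: HugePages_Total must equal the requested page count plus surplus and reserved pages. With this run's numbers (a standalone rendering, not the verbatim setup/hugepages.sh source):

  nr_hugepages=1024 # pages the test requested
  surp=0            # HugePages_Surp from /proc/meminfo
  resv=0            # HugePages_Rsvd from /proc/meminfo
  total=1024        # HugePages_Total from /proc/meminfo
  # 1024 == 1024 + 0 + 0, so both assertions above succeed
  (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages ))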
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.482 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 22757764 kB' 'MemUsed: 9829148 kB' 'SwapCached: 0 kB' 'Active: 6077724 kB' 'Inactive: 209084 kB' 'Active(anon): 5889408 kB' 'Inactive(anon): 0 kB' 'Active(file): 188316 kB' 'Inactive(file): 209084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5833768 kB' 'Mapped: 61800 kB' 'AnonPages: 456272 kB' 'Shmem: 5436368 kB' 'KernelStack: 9032 kB' 'PageTables: 4988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102912 kB' 'Slab: 332856 kB' 'SReclaimable: 102912 kB' 'SUnreclaim: 229944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... IFS/read/compare/continue trace elided while the loop scans the node0 fields for HugePages_Surp ...]
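get_nodes, traced above, enumerates the NUMA nodes under /sys/devices/system/node and records each node's current hugepage count (node0=1024, node1=0 here). A hedged sketch; the trace shows only the resolved assignments, so reading the per-node 2048 kB nr_hugepages knob is an assumption about where the counts come from:

  shopt -s extglob
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
    # assumed source of the traced values 1024 and 0
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]} # 2 on this dual-socket box
  (( no_nodes > 0 ))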
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:10.483 node0=1024 expecting 1024
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:10.483 
00:03:10.483 real 0m4.130s
00:03:10.483 user 0m1.570s
00:03:10.483 sys 0m2.641s
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:10.483 18:57:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:10.483 ************************************
00:03:10.483 END TEST default_setup
00:03:10.483 ************************************
00:03:10.483 18:57:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:10.483 18:57:50 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:10.483 18:57:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:10.483 18:57:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:10.483 18:57:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:10.483 ************************************
00:03:10.483 START TEST per_node_1G_alloc
00:03:10.483 ************************************
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
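get_test_nr_hugepages converts the requested size into default-size pages before splitting it across the listed nodes: 1048576 kB (1 GiB) divided by the 2048 kB Hugepagesize gives the nr_hugepages=512 seen at hugepages.sh@57 below. A standalone rendering of that arithmetic (not the verbatim in-tree function):

  size=1048576           # requested size in kB (1 GiB)
  default_hugepages=2048 # Hugepagesize from /proc/meminfo, in kB
  nr_hugepages=$((size / default_hugepages)) # 1048576 / 2048 = 512
  node_ids=(0 1)
  declare -a nodes_test
  for id in "${node_ids[@]}"; do
    nodes_test[id]=$nr_hugepages # 512 pages requested on each node
  done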
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:10.483 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:10.742 18:57:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:14.030 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:14.030 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:14.030 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:14.030 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:14.030 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
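The NRHUGE and HUGENODE assignments at hugepages.sh@146 are how the test parameterizes scripts/setup.sh: pages per node and a comma-separated node list. Run by hand, the equivalent step would be roughly the following (assuming root; the devices above were already bound to vfio-pci, so only the hugepage reservation changes):

  sudo NRHUGE=512 HUGENODE=0,1 \
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh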
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.294 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40128832 kB' 'MemAvailable: 43715116 kB' 'Buffers: 2704 kB' 'Cached: 15236576 kB' 'SwapCached: 0 kB' 'Active: 12404892 kB' 'Inactive: 3465204 kB' 'Active(anon): 11892852 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633500 kB' 'Mapped: 175424 kB' 'Shmem: 11262036 kB' 'KReclaimable: 211968 kB' 'Slab: 641360 kB' 'SReclaimable: 211968 kB' 'SUnreclaim: 429392 kB' 'KernelStack: 16560 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13260236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203636 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
[... the IFS/read/compare/continue trace continues below while get_meminfo scans for AnonHugePages ...]
18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local 
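What the trace above boils down to: get_meminfo snapshots the chosen meminfo file into an array, then walks it with IFS=': ' until the requested key matches, echoing that key's value (AnonHugePages is 0 kB here, hence anon=0). A minimal standalone sketch of the same scan, with the helper name get_meminfo_value being this sketch's own (the real setup/common.sh helper also handles per-node files, shown further below):

#!/usr/bin/env bash
# Sketch only: reproduces the IFS=': ' key scan from the trace above.
get_meminfo_value() {            # hypothetical name, not the SPDK helper
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # the @32 compare/continue lines
		echo "$val"                        # the @33 echo, units dropped
		return 0
	done < /proc/meminfo
	return 1                               # requested key not present
}

get_meminfo_value AnonHugePages    # prints 0 on this runner
get_meminfo_value HugePages_Total  # prints 1024

IFS=': ' treats both the colon and spaces as separators, so 'AnonHugePages:  0 kB' splits cleanly into key, value, and unit without any extra trimming.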
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40130756 kB' 'MemAvailable: 43717040 kB' 'Buffers: 2704 kB' 'Cached: 15236580 kB' 'SwapCached: 0 kB' 'Active: 12404444 kB' 'Inactive: 3465204 kB' 'Active(anon): 11892404 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633668 kB' 'Mapped: 175328 kB' 'Shmem: 11262040 kB' 'KReclaimable: 211968 kB' 'Slab: 641320 kB' 'SReclaimable: 211968 kB' 'SUnreclaim: 429352 kB' 'KernelStack: 16528 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13261248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203572 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.296 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical '[[ key == ... ]] / continue' trace pairs for the remaining /proc/meminfo keys omitted ...]
00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
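Note the @23 probe in the trace: node is empty for these system-wide lookups, so the tested path is the nonexistent /sys/devices/system/node/node/meminfo and mem_f stays /proc/meminfo. With a node number set, the per-node file would be read instead, and its "Node N " line prefixes stripped, which is what the @29 expansion does. A runnable sketch of that selection; the commands mirror the trace, while the argument handling and the final grep are this sketch's own:

#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern in the strip below

node=${1:-}        # empty => system-wide, a number => that NUMA node
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node${node}/meminfo ]] &&
	mem_f=/sys/devices/system/node/node${node}/meminfo

mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files
printf '%s\n' "${mem[@]}" | grep '^HugePages_'

Run without arguments it prints the system-wide counters; with an argument of 0 it would print node 0's share of the pool (e.g. 512 of the 1024 pages if the per_node allocation splits them evenly across two nodes).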
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40131512 kB' 'MemAvailable: 43717796 kB' 'Buffers: 2704 kB' 'Cached: 15236596 kB' 'SwapCached: 0 kB' 'Active: 12404504 kB' 'Inactive: 3465204 kB' 'Active(anon): 11892464 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633700 kB' 'Mapped: 175328 kB' 'Shmem: 11262056 kB' 'KReclaimable: 211968 kB' 'Slab: 641320 kB' 'SReclaimable: 211968 kB' 'SUnreclaim: 429352 kB' 'KernelStack: 16656 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13261268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203604 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.298 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:14.300 nr_hugepages=1024 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.300 resv_hugepages=0 00:03:14.300 18:57:54 
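The wall of "continue" entries above is xtrace output from a key scan: setup/common.sh snapshots the meminfo file, reads it back one "key: value" pair at a time, and skips every field until the requested key (HugePages_Rsvd here) matches, at which point it echoes the value and returns. A minimal sketch of that shape, reconstructed from the trace rather than copied from the SPDK source:

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val _
    # Per-node counters live in sysfs when a node index is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem=()
    mapfile -t mem < "$mem_f"
    # sysfs lines carry a "Node N " prefix; strip it before splitting.
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the "continue" wall in the trace
        echo "$val"                        # 0 for HugePages_Rsvd on this box
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Called as get_meminfo HugePages_Rsvd, the sketch prints 0 on this machine, which hugepages.sh@100 stores as resv.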
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.300 surplus_hugepages=0 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.300 anon_hugepages=0 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40131896 kB' 'MemAvailable: 43718180 kB' 'Buffers: 2704 kB' 'Cached: 15236596 kB' 'SwapCached: 0 kB' 'Active: 12404460 kB' 'Inactive: 3465204 kB' 'Active(anon): 11892420 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633700 kB' 'Mapped: 175328 kB' 'Shmem: 11262056 kB' 'KReclaimable: 211968 kB' 'Slab: 641320 kB' 'SReclaimable: 211968 kB' 'SUnreclaim: 429352 kB' 'KernelStack: 16752 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13262048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203668 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 
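Note the probe against /sys/devices/system/node/node/meminfo above: this call passes no node argument, $node expands empty, the malformed sysfs path fails the -e test, and the scan falls back to the system-wide /proc/meminfo, hence the full-machine totals in the printf snapshot. The branch, spelled out as a hedged sketch (the exact test order in common.sh@23-25 differs slightly):

node=                      # no node argument on this call
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo   # per-node passes only
fi
echo "$mem_f"              # -> /proc/meminfo here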
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.300 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.301 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:14.301 18:57:54 
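With resv known, the checks at hugepages.sh@107-110 reduce to one bookkeeping invariant: the kernel-reported HugePages_Total must equal requested pages plus surplus plus reserved. Distilled into a single assertion (variable names taken from the trace, get_meminfo from the sketch further up):

nr_hugepages=1024 surp=0 resv=0
total=$(get_meminfo HugePages_Total)        # 1024 in this run
if (( total != nr_hugepages + surp + resv )); then
    echo "hugepage accounting mismatch: got $total" >&2
    exit 1
fi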
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 23833952 kB' 'MemUsed: 8752960 kB' 'SwapCached: 0 kB' 'Active: 6075052 kB' 'Inactive: 209084 kB' 'Active(anon): 5886736 kB' 'Inactive(anon): 0 kB' 'Active(file): 188316 kB' 'Inactive(file): 209084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5833884 kB' 'Mapped: 61048 kB' 'AnonPages: 453532 kB' 'Shmem: 5436484 kB' 'KernelStack: 9192 kB' 'PageTables: 4848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102912 kB' 'Slab: 332980 kB' 'SReclaimable: 102912 kB' 'SUnreclaim: 230068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
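get_nodes (hugepages.sh@27-33) enumerates the NUMA nodes with an extglob pattern and records a per-node page count; the 512-per-node values above are the 1024 global pages split evenly across two sockets. A sketch under that assumption; only the assignments are visible in the trace, the lookup on the right-hand side is inferred:

shopt -s extglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips the path down to the bare index: 0, 1, ...
    nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
done
no_nodes=${#nodes_sys[@]}   # 2 on this machine
(( no_nodes > 0 )) || exit 1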
-- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.302 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.563 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27708548 kB' 'MemFree: 16296496 kB' 'MemUsed: 11412052 kB' 'SwapCached: 0 kB' 'Active: 6329268 kB' 'Inactive: 3256120 kB' 'Active(anon): 6005544 kB' 'Inactive(anon): 0 kB' 'Active(file): 323724 kB' 'Inactive(file): 3256120 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9405460 kB' 'Mapped: 114280 kB' 'AnonPages: 179932 kB' 'Shmem: 5825616 kB' 'KernelStack: 7544 kB' 'PageTables: 3280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109056 kB' 'Slab: 308340 kB' 'SReclaimable: 109056 kB' 'SUnreclaim: 199284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
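The per-node passes reuse the same key scan, but /sys/devices/system/node/nodeN/meminfo prefixes every line with "Node N ", so common.sh@29 strips that prefix (the mem=("${mem[@]#Node +([0-9]) }") step) before the "key: value" split can match. Illustrated for node 1:

shopt -s extglob
mapfile -t mem < /sys/devices/system/node/node1/meminfo
# "Node 1 HugePages_Surp: 0" -> "HugePages_Surp: 0"
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | grep -m1 HugePages_Surp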
00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.564 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[get_meminfo scan: the fields Inactive(file) through HugePages_Free each fail the HugePages_Surp match and hit setup/common.sh@32's 'continue'; the matching field follows]
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:14.565 node0=512 expecting 512
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:14.565 node1=512 expecting 512
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:14.565
00:03:14.565 real 0m3.862s
00:03:14.565 user 0m1.429s
00:03:14.565 sys 0m2.498s
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:14.565 18:57:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:14.565 ************************************
00:03:14.565 END TEST per_node_1G_alloc
00:03:14.565 ************************************
00:03:14.565 18:57:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:14.565 18:57:54 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
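The 'node0=512 expecting 512' / 'node1=512 expecting 512' lines above are produced by the per-node accounting traced at setup/hugepages.sh@117-130. A minimal sketch of that check, reconstructed from the xtrace alone (array names follow the trace; the real helpers in SPDK's test/setup scripts carry more state, so treat this as an approximation rather than the exact source):

    #!/usr/bin/env bash
    # nodes_test[] holds the hugepage count the test expects on each NUMA node;
    # nodes_sys[] stands in for the HugePages_Surp value get_meminfo returned (0 here).
    nodes_test=(512 512)
    nodes_sys=(0 0)
    sorted_t=() sorted_s=()

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += nodes_sys[node] ))   # @117: fold in surplus pages (+= 0 here)
        sorted_t[nodes_test[node]]=1                # @127: collect the distinct counts seen
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done

    [[ ${nodes_test[0]} == 512 ]]   # @130: the assertion the trace shows as [[ 512 == \5\1\2 ]]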
00:03:14.565 18:57:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:14.565 18:57:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:14.565 18:57:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:14.565 ************************************
00:03:14.565 START TEST even_2G_alloc
00:03:14.565 ************************************
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:14.565 18:57:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
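The get_test_nr_hugepages trace above (setup/hugepages.sh@49-84) computes the even allocation that gives this test its name. A condensed sketch of that math, with the units inferred from the trace rather than taken from the source (2097152 kB requested at an assumed 2048 kB hugepage size yields nr_hugepages=1024, dealt out 512/512 across this rig's two NUMA nodes; the bare ': 512' / ': 1' records are the xtrace of the ':' no-op used for its arithmetic side effects):

    #!/usr/bin/env bash
    # Assumed values, read off the xtrace: the argument is a size in kB and
    # default_hugepages is the 2048 kB system hugepage size.
    default_hugepages=2048
    size=2097152                                   # argument traced at hugepages.sh@152

    (( size >= default_hugepages )) || exit 1      # @55
    nr_hugepages=$(( size / default_hugepages ))   # @57: 1024 pages in total

    _nr_hugepages=$nr_hugepages                    # @64
    _no_nodes=2                                    # @65: NUMA nodes on this rig
    nodes_test=()

    while (( _no_nodes > 0 )); do                  # @81-84: split the remainder evenly
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traced as ': 512' then ': 0'
        : $(( --_no_nodes ))                                  # traced as ': 1' then ': 0'
    done

    echo "${nodes_test[@]}"                        # -> 512 512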
00:03:17.940 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:17.941 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:17.941 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:17.941 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:17.941 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.201 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.202 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40121684 kB' 'MemAvailable: 43707968 kB' 'Buffers: 2704 kB' 'Cached: 15236740 kB' 'SwapCached: 0 kB' 'Active: 12405568 kB'
'Inactive: 3465204 kB' 'Active(anon): 11893528 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633980 kB' 'Mapped: 175452 kB' 'Shmem: 11262200 kB' 'KReclaimable: 211968 kB' 'Slab: 641596 kB' 'SReclaimable: 211968 kB' 'SUnreclaim: 429628 kB' 'KernelStack: 16592 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13260952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203652 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
[get_meminfo scan: the fields MemTotal through HardwareCorrupted each fail the AnonHugePages match and hit setup/common.sh@32's 'continue'; the matching field follows]
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.203 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40122732 kB' 'MemAvailable: 43709016 kB' 'Buffers: 2704 kB' 'Cached: 15236744 kB' 'SwapCached: 0 kB' 'Active: 12404824 kB' 'Inactive: 3465204 kB' 'Active(anon): 11892784 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633724 kB' 'Mapped: 175336 kB' 'Shmem: 11262204 kB' 'KReclaimable: 211968 kB' 'Slab: 641536 kB' 'SReclaimable: 211968 kB' 'SUnreclaim: 429568 kB' 'KernelStack: 16576 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13260972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203636 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
[get_meminfo scan: the fields MemTotal through HugePages_Rsvd each fail the HugePages_Surp match and hit setup/common.sh@32's 'continue'; the matching field follows]
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40122692 kB' 'MemAvailable: 43708976 kB' 'Buffers: 2704 kB' 'Cached: 15236760 kB' 'SwapCached: 0 kB' 'Active: 12404836 kB' 'Inactive: 3465204 kB' 'Active(anon): 11892796 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633720 kB' 'Mapped: 175336 kB' 'Shmem: 11262220 kB' 'KReclaimable: 211968 kB' 'Slab: 641536 kB' 'SReclaimable: 211968 kB' 'SUnreclaim: 429568 kB' 'KernelStack: 16576 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13260992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203636 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
[get_meminfo scan: the fields MemTotal through AnonPages each fail the HugePages_Rsvd match and hit setup/common.sh@32's 'continue'; the scan resumes after the parser sketch below] 00:03:18.205
18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.205 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:18.207 nr_hugepages=1024 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.207 resv_hugepages=0 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.207 surplus_hugepages=0 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.207 anon_hugepages=0 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40122944 kB' 'MemAvailable: 43709228 kB' 'Buffers: 2704 kB' 'Cached: 15236780 kB' 'SwapCached: 0 kB' 'Active: 12404860 kB' 'Inactive: 3465204 kB' 'Active(anon): 11892820 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633724 kB' 'Mapped: 175336 kB' 'Shmem: 11262240 kB' 'KReclaimable: 211968 kB' 'Slab: 641536 kB' 'SReclaimable: 211968 kB' 'SUnreclaim: 429568 kB' 'KernelStack: 16576 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13261012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203636 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
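For readers following the trace: the long runs of IFS=': ' / read -r var val _ / [[ ... ]] / continue entries above are successive passes of the setup/common.sh get_meminfo helper scanning a meminfo file key by key (first HugePages_Surp, then HugePages_Rsvd, here HugePages_Total). Below is a minimal sketch of that helper, reconstructed only from what this trace shows; the [[ -n '' ]] test at common.sh@25 guards an override whose variable name is not visible in the trace, so it is left out.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used below

# Sketch of get_meminfo as it appears in this trace (setup/common.sh).
# Prints the value of one meminfo key; with a node argument it reads the
# per-NUMA-node meminfo instead of /proc/meminfo.
get_meminfo() {
	local get=$1   # common.sh@17: key to look up, e.g. HugePages_Total
	local node=$2  # common.sh@18: optional NUMA node id
	local var val
	local mem_f mem

	mem_f=/proc/meminfo  # common.sh@22: system-wide default
	# common.sh@23-24: prefer the per-node file when it exists; with an
	# empty $node this probes the nonexistent .../node/node/meminfo path,
	# exactly as the trace shows
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"  # common.sh@28: slurp the file
	# common.sh@29: per-node files prefix each line with "Node N "; strip it
	mem=("${mem[@]#Node +([0-9]) }")
	# common.sh@31-33: scan "Key: value" pairs; print and stop on a match,
	# skipping every other key with continue (the bulk of this trace)
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
}

# Calls matching this log:
#   get_meminfo HugePages_Rsvd     -> 0
#   get_meminfo HugePages_Total    -> 1024
#   get_meminfo HugePages_Surp 0   -> 0 (read from node0's meminfo)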
00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.469 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 23836948 kB' 'MemUsed: 8749964 kB' 'SwapCached: 0 kB' 'Active: 6075676 kB' 'Inactive: 209084 kB' 'Active(anon): 5887360 kB' 'Inactive(anon): 0 kB' 'Active(file): 188316 kB' 'Inactive(file): 209084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5834036 kB' 'Mapped: 61032 kB' 'AnonPages: 453848 kB' 
'Shmem: 5436636 kB' 'KernelStack: 9000 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102912 kB' 'Slab: 333084 kB' 'SReclaimable: 102912 kB' 'SUnreclaim: 230172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
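At this point the scan runs against node0's meminfo: the get_nodes pass traced just before it (hugepages.sh@27-33) found two NUMA nodes with 512 hugepages each, and the hugepages.sh@115-117 loop folds reserved and per-node surplus pages into each node's expected total. Below is a minimal sketch of that flow, assuming the get_meminfo sketch above. Two caveats: the trace shows the value 512 already substituted at hugepages.sh@30, so reading nr_hugepages from sysfs is an assumption about its source, and the seeding of nodes_test from nodes_sys happens outside the traced lines.

shopt -s extglob nullglob

# get_nodes (hugepages.sh@27-33): one array entry per NUMA node directory.
get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# hugepages.sh@30: the per-node count; the sysfs read here is an
		# assumed source, since the trace only shows the substituted 512
		nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
	done
	no_nodes=${#nodes_sys[@]}  # 2 in this run
	(( no_nodes > 0 ))         # hugepages.sh@33: fail when no nodes exist
}

# hugepages.sh@115-117: per node, add the reserved pages and that node's
# surplus (get_meminfo HugePages_Surp N, the scan traced here) to the total.
account_nodes() {
	local node
	for node in "${!nodes_test[@]}"; do
		(( nodes_test[node] += resv ))
		(( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
	done
}

# In this run resv=0 and both nodes report HugePages_Surp: 0, so
# nodes_test stays at 512 pages per node, matching nr_hugepages=1024.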
00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.470 18:57:58 setup.sh.hugepages.even_2G_alloc -- 
[... xtrace elided: setup/common.sh@31-@32 loops `IFS=': '; read -r var val _` over the remaining node0 meminfo fields (AnonPages through HugePages_Free), hitting `continue` on each until the requested key matches ...]
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.471 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27708548 kB' 'MemFree: 16287512 kB' 'MemUsed: 11421036 kB' 'SwapCached: 0 kB' 'Active: 6329100 kB' 'Inactive: 3256120 kB' 'Active(anon): 6005376 kB' 'Inactive(anon): 0 kB' 'Active(file): 323724 kB' 'Inactive(file): 3256120 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9405472 kB' 'Mapped: 114304 kB' 'AnonPages: 179748 kB' 'Shmem: 5825628 kB' 'KernelStack: 7560 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109056 kB' 'Slab: 308452 kB' 'SReclaimable: 109056 kB' 'SUnreclaim: 199396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: the same @31-@32 read/continue loop walks node1's fields (MemTotal through HugePages_Free) until the requested key matches ...]
00:03:18.472 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.472 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.472 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.472 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:18.472 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:18.472 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:18.472 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:18.472 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:18.472 node0=512 expecting 512
00:03:18.473 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:18.473 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:18.473 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:18.473 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:18.473 node1=512 expecting 512
00:03:18.473 18:57:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:18.473 
00:03:18.473 real	0m3.848s
00:03:18.473 user	0m1.418s
00:03:18.473 sys	0m2.497s
00:03:18.473 18:57:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:18.473 18:57:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:18.473 ************************************
00:03:18.473 END TEST even_2G_alloc
00:03:18.473 ************************************
00:03:18.473 18:57:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:18.473 18:57:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:18.473 18:57:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:18.473 18:57:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
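For readability, here is a sketch of the get_meminfo helper whose xtrace fills this part of the log, reconstructed from the setup/common.sh@16-@33 entries above; it is an illustration of the traced logic, not the verbatim SPDK source:

  #!/usr/bin/env bash
  # Sketch of setup/common.sh's get_meminfo, as recoverable from the trace.
  shopt -s extglob   # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f mem

      mem_f=/proc/meminfo
      # When a node id is given and the kernel exposes a per-node file,
      # read that instead (common.sh@22-@24 in the trace).
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo

      mapfile -t mem <"$mem_f"
      # Per-node files prefix every line with "Node <id> "; strip it so
      # the same parser handles both formats (common.sh@29).
      mem=("${mem[@]#Node +([0-9]) }")

      # Scan "Key: value [unit]" pairs until the requested key matches --
      # each long read/continue run collapsed above is one pass of this loop.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Surp 1   # prints 0 on the node traced above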
00:03:18.473 18:57:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:18.473 ************************************
00:03:18.473 START TEST odd_alloc
00:03:18.473 ************************************
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:18.473 18:57:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:22.676 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:22.676 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:22.676 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:22.676 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:22.676 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.676 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40148592 kB' 'MemAvailable: 43734876 kB' 'Buffers: 2704 kB' 'Cached: 15236896 kB' 'SwapCached: 0 kB' 'Active: 12406344 kB' 'Inactive: 3465204 kB' 'Active(anon): 11894304 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634608 kB' 'Mapped: 175476 kB' 'Shmem: 11262356 kB' 'KReclaimable: 211968 kB' 'Slab: 641508 kB' 'SReclaimable: 211968 kB' 'SUnreclaim: 429540 kB' 'KernelStack: 16592 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486732 kB' 'Committed_AS: 13261624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203652 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
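A note on the common.sh@29 expansion that keeps appearing in this trace: per-node meminfo files prefix every line with "Node <id> ", while /proc/meminfo (read just above, since no node id was passed) does not. The extglob pattern strips the prefix only where it is present, so one parser serves both files. A minimal standalone demo, with made-up sample lines:

  #!/usr/bin/env bash
  # Demo of the "Node <id> " prefix strip (illustrative data, not SPDK code).
  shopt -s extglob

  mem=('Node 1 HugePages_Total:   512' 'HugePages_Surp:     0')
  mem=("${mem[@]#Node +([0-9]) }")   # strips the prefix where present
  printf '%s\n' "${mem[@]}"
  # HugePages_Total:   512
  # HugePages_Surp:     0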
[... xtrace elided: the setup/common.sh@31-@32 read/continue loop skips every field from MemTotal through HardwareCorrupted before the requested key matches ...]
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.677 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40149004 kB' 'MemAvailable: 43735272 kB' 'Buffers: 2704 kB' 'Cached: 15236900 kB' 'SwapCached: 0 kB' 'Active: 12405576 kB' 'Inactive: 3465204 kB' 'Active(anon): 11893536 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634372 kB' 'Mapped: 175352 kB' 'Shmem: 11262360 kB' 'KReclaimable: 211936 kB' 'Slab: 641484 kB' 'SReclaimable: 211936 kB' 'SUnreclaim: 429548 kB' 'KernelStack: 16592 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486732 kB' 'Committed_AS: 13261640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203620 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
[... xtrace elided: the setup/common.sh@31-@32 read/continue loop walks MemTotal through HugePages_Total without yet matching HugePages_Surp ...]
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40149048 kB' 'MemAvailable: 43735316 kB' 'Buffers: 2704 kB' 'Cached: 15236916 kB' 'SwapCached: 0 kB' 'Active: 12405592 kB' 'Inactive: 3465204 kB' 'Active(anon): 11893552 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634368 kB' 'Mapped: 175352 kB' 'Shmem: 11262376 kB' 'KReclaimable: 211936 kB' 'Slab: 641484 kB' 'SReclaimable: 211936 kB' 'SUnreclaim: 429548 kB' 'KernelStack: 16592 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486732 kB' 'Committed_AS: 13261664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203620 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 
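What the xtrace above is exercising is the get_meminfo helper from setup/common.sh: it snapshots /proc/meminfo (or a per-node meminfo file), then walks it key by key with IFS=': ' until the requested key matches. The escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p in each test is just how bash xtrace prints a quoted right-hand side of ==, i.e. a literal, non-glob match. A minimal sketch of the same technique, assuming a Linux /proc and sysfs layout and simplified from what the trace shows (not a verbatim copy of the SPDK function):

#!/usr/bin/env bash
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val mem_f mem
    mem_f=/proc/meminfo
    # With a node id, prefer the per-node view; with node unset the probe
    # degenerates to .../node/node/meminfo, exactly as seen in the trace.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    # Per-node meminfo lines carry a "Node <id> " prefix; strip it (extglob).
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # quoted RHS => literal match
        echo "${val:-0}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp      # system-wide, as in the scan above
get_meminfo HugePages_Surp 0    # node 0 only, as later in this log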
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40149048 kB' 'MemAvailable: 43735316 kB' 'Buffers: 2704 kB' 'Cached: 15236916 kB' 'SwapCached: 0 kB' 'Active: 12405592 kB' 'Inactive: 3465204 kB' 'Active(anon): 11893552 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634368 kB' 'Mapped: 175352 kB' 'Shmem: 11262376 kB' 'KReclaimable: 211936 kB' 'Slab: 641484 kB' 'SReclaimable: 211936 kB' 'SUnreclaim: 429548 kB' 'KernelStack: 16592 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486732 kB' 'Committed_AS: 13261664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203620 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.678 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [... the same "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" check-and-continue pair repeats for every /proc/meminfo key, MemTotal through HugePages_Free ...]
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
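The two arithmetic guards just traced are the point of the surp/resv bookkeeping: the page count the test requested has to be accounted for by the kernel's totals, with surplus and reserved pages folded in. A sketch of that check using the values from this run (variable names mirror the trace; get_meminfo as sketched earlier, so this is illustrative rather than the literal hugepages.sh code):

nr_hugepages=1025                       # requested odd allocation
surp=$(get_meminfo HugePages_Surp)      # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run

(( 1025 == nr_hugepages + surp + resv ))   # hugepages.sh@107
(( 1025 == nr_hugepages ))                 # hugepages.sh@109, since surp == resv == 0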
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40149048 kB' 'MemAvailable: 43735316 kB' 'Buffers: 2704 kB' 'Cached: 15236932 kB' 'SwapCached: 0 kB' 'Active: 12405480 kB' 'Inactive: 3465204 kB' 'Active(anon): 11893440 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634220 kB' 'Mapped: 175352 kB' 'Shmem: 11262392 kB' 'KReclaimable: 211936 kB' 'Slab: 641484 kB' 'SReclaimable: 211936 kB' 'SUnreclaim: 429548 kB' 'KernelStack: 16576 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486732 kB' 'Committed_AS: 13261684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203620 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
00:03:22.679 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [... the same "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" check-and-continue pair repeats for every /proc/meminfo key until HugePages_Total matches ...]
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 23852984 kB' 'MemUsed: 8733928 kB' 'SwapCached: 0 kB' 'Active: 6075844 kB' 'Inactive: 209084 kB' 'Active(anon): 5887528 kB' 'Inactive(anon): 0 kB' 'Active(file): 188316 kB' 'Inactive(file): 209084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5834176 kB' 'Mapped: 61036 kB' 'AnonPages: 453920 kB' 'Shmem: 5436776 kB' 'KernelStack: 9000 kB' 'PageTables: 4664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102912 kB' 'Slab: 333072 kB' 'SReclaimable: 102912 kB' 'SUnreclaim: 230160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
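One detail worth calling out before the node-0 scan: /sys/devices/system/node/node0/meminfo prefixes every line with "Node 0 ", which would break the key/value parse, so common.sh@29 strips it with an extglob pattern before reading. An illustrative one-liner (the sample line is hypothetical, but matches the per-node meminfo format):

shopt -s extglob
line='Node 0 HugePages_Surp: 0'
echo "${line#Node +([0-9]) }"    # -> HugePages_Surp: 0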
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.680 18:58:02 
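What the trace above is doing: get_meminfo (test/setup/common.sh) dumps a meminfo file into an array, strips the per-node "Node N" prefix, then scans field by field with IFS=': ' until the requested key matches and echoes its value. Below is a minimal standalone sketch of the same pattern; the procfs/sysfs paths are the kernel's real ones, but the function body is a simplified reconstruction, not the SPDK source:

#!/usr/bin/env bash
# Sketch of the scan traced above: return one field (e.g. HugePages_Surp)
# from /proc/meminfo or from a per-node meminfo file.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    # Per-node counters live in sysfs; those lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}             # strip the per-node prefix
        IFS=': ' read -r var val _ <<< "$line" # e.g. var=HugePages_Surp val=0
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Surp 0   # prints 0 for node0 in the run above

Reading the whole line first and splitting afterwards keeps the "Node N " handling in one place; the helper in the trace instead strips the prefix on the mapfile'd array (the mem=("${mem[@]#Node +([0-9]) }") step) before the same read/compare loop.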
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27708548 kB' 'MemFree: 16296756 kB' 'MemUsed: 11411792 kB' 'SwapCached: 0 kB' 'Active: 6329776 kB' 'Inactive: 3256120 kB' 'Active(anon): 6006052 kB' 'Inactive(anon): 0 kB' 'Active(file): 323724 kB' 'Inactive(file): 3256120 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9405484 kB' 'Mapped: 114820 kB' 'AnonPages: 180448 kB' 'Shmem: 5825640 kB' 'KernelStack: 7592 kB' 'PageTables: 3428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109024 kB' 'Slab: 308412 kB' 'SReclaimable: 109024 kB' 'SUnreclaim: 199388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace condensed: the same setup/common.sh@31-32 read/compare/continue loop over the node1 dump until HugePages_Surp matches]
00:03:22.680 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:22.681 node0=512 expecting 513
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:22.681 node1=513 expecting 512
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:22.681
00:03:22.681 real 0m3.852s
00:03:22.681 user 0m1.457s
00:03:22.681 sys 0m2.458s
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:22.681 18:58:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:22.681 ************************************
00:03:22.681 END TEST odd_alloc
00:03:22.681 ************************************
00:03:22.681 18:58:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:22.681 18:58:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:22.681 18:58:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:22.681 18:58:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:22.681 18:58:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:22.681 ************************************
00:03:22.681 START TEST custom_alloc
00:03:22.681 ************************************
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
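The custom_alloc trace that follows computes its page counts the same way odd_alloc did: a size in kB is divided by the default 2048 kB hugepage size (1048576 kB -> 512 pages), and the result is spread over the NUMA nodes. A rough sketch of that arithmetic, assuming 2 MB hugepages; the names mirror the trace, but the split loop is a simplified reconstruction, not the SPDK source:

# Sizing arithmetic sketched from the trace (simplified reconstruction).
default_hugepages=2048                             # kB per 2 MB hugepage
declare -a nodes_test

get_test_nr_hugepages() {
    local size=$1                                  # requested pool size in kB
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))   # 1048576 kB -> 512 pages
}

get_test_nr_hugepages_per_node() {
    # Spread nr_hugepages across the nodes; any remainder lands on one node,
    # which is how odd_alloc above ended up with 512 + 513 for 1025 pages.
    local _no_nodes=$1 node
    for (( node = 0; node < _no_nodes; node++ )); do
        nodes_test[node]=$(( nr_hugepages / _no_nodes ))
    done
    (( nodes_test[_no_nodes - 1] += nr_hugepages % _no_nodes )) || true
}

get_test_nr_hugepages 1048576      # -> nr_hugepages=512
get_test_nr_hugepages_per_node 2   # -> nodes_test=(256 256)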
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.681 18:58:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
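At this point the test hands control to scripts/setup.sh with the HUGENODE string built above, asking for 512 pages on node0 and 1024 on node1 (1536 total). Reproducing the same allocation by hand might look like the sketch below; the HUGENODE value is copied from the trace, while the verification loop is illustrative and uses the kernel's standard per-node sysfs counters:

# Reserve 512 pages on node0 and 1024 on node1, as in the trace above.
sudo HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' \
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh

# Check what the kernel actually allocated, per node.
for n in /sys/devices/system/node/node[0-9]*; do
    echo "$n: $(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
done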
00:03:25.974 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:25.974 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:25.974 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:25.974 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:25.974 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:25.974 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:25.974 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.239 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 39108112 kB' 'MemAvailable: 42694380 kB' 'Buffers: 2704 kB' 'Cached: 15237048 kB' 'SwapCached: 0 kB' 'Active: 12406416 kB' 'Inactive: 3465204 kB' 'Active(anon): 11894376 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634692 kB' 'Mapped: 175456 kB' 'Shmem: 11262508 kB' 'KReclaimable: 211936 kB' 'Slab: 641408 kB' 'SReclaimable: 211936 kB' 'SUnreclaim: 429472 kB' 'KernelStack: 16656 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963468 kB' 'Committed_AS: 13262036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203716 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
[xtrace condensed: setup/common.sh@31-32 read/compare/continue over every field of the system-wide dump above until AnonHugePages matches]
00:03:26.240 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.240 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.240 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.240 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
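The check that just completed reads /sys/kernel/mm/transparent_hugepage/enabled (the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test) and, because THP is not fully off, samples AnonHugePages before moving on to the surplus and reserved counters. A simplified reconstruction of that sequence, reusing the get_meminfo sketch from earlier and assuming nr_hugepages is already set (1536 in this run):

# Sketch of the verify_nr_hugepages checks seen here (simplified
# reconstruction; the real helper lives in test/setup/hugepages.sh).
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is enabled in some form, so anonymous huge pages could skew the
    # accounting; record the current figure (0 kB in the run above).
    anon=$(get_meminfo AnonHugePages)
fi
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
total=$(get_meminfo HugePages_Total)
# The invariant asserted at setup/hugepages.sh@110 earlier in the log:
(( total == nr_hugepages + surp + resv )) || echo "unexpected total: $total"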
00:03:26.240 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace elided: same function prologue as above (@17-@31: locals, mem_f=/proc/meminfo, mapfile, Node-prefix strip) with get=HugePages_Surp]
00:03:26.240 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 39108976 kB' 'MemAvailable: 42695244 kB' 'Buffers: 2704 kB' 'Cached: 15237052 kB' 'SwapCached: 0 kB' 'Active: 12405884 kB' 'Inactive: 3465204 kB' 'Active(anon): 11893844 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634608 kB' 'Mapped: 175368 kB' 'Shmem: 11262512 kB' 'KReclaimable: 211936 kB' 'Slab: 641376 kB' 'SReclaimable: 211936 kB' 'SUnreclaim: 429440 kB' 'KernelStack: 16624 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963468 kB' 'Committed_AS: 13262052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203716 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
[xtrace elided: per-key scan from MemTotal through HugePages_Rsvd, no match]
00:03:26.242 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.242 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.242 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.242 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:26.242 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace elided: same function prologue with get=HugePages_Rsvd]
00:03:26.242 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 39110828 kB' 'MemAvailable: 42697096 kB' 'Buffers: 2704 kB' 'Cached: 15237064 kB' 'SwapCached: 0 kB' 'Active: 12405884 kB' 'Inactive: 3465204 kB' 'Active(anon): 11893844 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634568 kB' 'Mapped: 175368 kB' 'Shmem: 11262524 kB' 'KReclaimable: 211936 kB' 'Slab: 641376 kB' 'SReclaimable: 211936 kB' 'SUnreclaim: 429440 kB' 'KernelStack: 16608 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963468 kB' 'Committed_AS: 13262076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203716 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
[xtrace elided: per-key scan from MemTotal through HugePages_Free, no match]
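Each get_meminfo call re-reads /proc/meminfo in full, which is why the printf'd snapshots above drift slightly between calls (MemFree 39108976 -> 39110828 kB, plus AnonPages and PageTables) while every hugepage counter stays fixed. Those fixed counters are internally consistent, as the quick check below shows; the values are taken straight from the snapshots in this log.

  # hugepage figures from the snapshots above
  pages=1536    # HugePages_Total, and HugePages_Free: none in use yet
  page_kb=2048  # Hugepagesize
  echo $((pages * page_kb))         # 3145728 -> equals the 'Hugetlb: 3145728 kB' line
  echo $((pages * page_kb / 1024))  # 3072 MiB carved out for the custom allocation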
00:03:26.244 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.244 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.244 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.244 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:26.244 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:03:26.244 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:26.244 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:26.244 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:26.244 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:26.244 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:26.244 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace elided: same function prologue with get=HugePages_Total]
00:03:26.244 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 39109820 kB' 'MemAvailable: 42696088 kB' 'Buffers: 2704 kB' 'Cached: 15237064 kB' 'SwapCached: 0 kB' 'Active: 12406184 kB' 'Inactive: 3465204 kB' 'Active(anon): 11894144 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634876 kB' 'Mapped: 175368 kB' 'Shmem: 11262524 kB' 'KReclaimable: 211936 kB' 'Slab: 641376 kB' 'SReclaimable: 211936 kB' 'SUnreclaim: 429440 kB' 'KernelStack: 16608 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963468 kB' 'Committed_AS: 13262096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203716 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
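The hugepages.sh@102-@109 lines above are the point of this whole block: after querying the anon, surplus and reserved counters, the test asserts that the 1536-page custom allocation it requested is exactly what the kernel granted, and only then re-reads HugePages_Total at @110. A rough reconstruction of that check follows; nr_hugepages and anon are set earlier in the script, outside this excerpt.

  surp=$(get_meminfo HugePages_Surp)  # -> 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)  # -> 0 in this run
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"
  (( 1536 == nr_hugepages + surp + resv ))  # pool not padded by surplus/reserved pages
  (( 1536 == nr_hugepages ))                # the kernel granted the full request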
[xtrace elided: per-key scan for HugePages_Total walks MemTotal through ShmemHugePages with no match yet; the trace continues] 00:03:26.246 18:58:06
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.246 18:58:06 
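The cycle condensed above is the core of setup/common.sh's get_meminfo: every 'key: value' line is split on ': ' and skipped with continue until the requested key appears, then its value is echoed and the function returns. A minimal standalone sketch of the same pattern (the helper name is illustrative, and it streams the file directly instead of going through the script's mapfile array):

  # Scan a meminfo-style file for one key, mirroring the traced loop.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # non-matching fields fall through
          echo "$val"                        # numeric value; the ' kB' unit lands in $_
          return 0
      done < /proc/meminfo
      return 1
  }
  # get_meminfo_sketch HugePages_Total  -> prints 1536 on this host, as traced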
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 23861132 kB' 'MemUsed: 8725780 kB' 'SwapCached: 0 kB' 'Active: 6075468 kB' 'Inactive: 209084 kB' 'Active(anon): 5887152 kB' 'Inactive(anon): 0 kB' 'Active(file): 188316 kB' 'Inactive(file): 209084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5834296 kB' 'Mapped: 61036 kB' 'AnonPages: 453532 kB' 'Shmem: 5436896 kB' 'KernelStack: 9016 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102912 kB' 'Slab: 332976 kB' 'SReclaimable: 102912 kB' 'SUnreclaim: 230064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
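For a single NUMA node the same scan runs against /sys/devices/system/node/node<N>/meminfo, whose lines carry a 'Node N ' prefix; the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips it so the key/value split stays uniform. A sketch of that file selection and strip, assuming extglob is enabled as it is in the SPDK setup scripts:

  shopt -s extglob                    # required for the +([0-9]) pattern
  node=0
  mem_f=/proc/meminfo                 # system-wide fallback
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 HugePages_Total: 512" -> "HugePages_Total: 512"
  printf '%s\n' "${mem[@]}" | grep '^HugePages'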
00:03:26.246 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the scan walks every node0 field in the dump above, MemTotal through HugePages_Free; none matches HugePages_Surp]
00:03:26.247 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.247 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.247 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.247 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.247 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.247 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.247 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:26.247 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.248 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:26.248 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:26.248 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.248 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.248 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:26.248 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:26.248 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.248 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.248 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27708548 kB' 'MemFree: 15249244 kB' 'MemUsed: 12459304 kB' 'SwapCached: 0 kB' 'Active: 6330524 kB' 'Inactive: 3256120 kB' 'Active(anon): 6006800 kB' 'Inactive(anon): 0 kB' 'Active(file): 323724 kB' 'Inactive(file): 3256120 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9405520 kB' 'Mapped: 114332 kB' 'AnonPages: 181176 kB' 'Shmem: 5825676 kB' 'KernelStack: 7608 kB' 'PageTables: 3476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109024 kB' 'Slab: 308400 kB' 'SReclaimable: 109024 kB' 'SUnreclaim: 199376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:26.248 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the scan walks every node1 field in the dump above, MemTotal through HugePages_Free; none matches HugePages_Surp]
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:26.249 node0=512 expecting 512
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:26.249 node1=1024 expecting 1024
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:26.249 
00:03:26.249 real 0m3.888s
00:03:26.249 user 0m1.435s
00:03:26.249 sys 0m2.502s
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:26.249 18:58:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:26.249 ************************************
00:03:26.249 END TEST custom_alloc
00:03:26.249 ************************************
00:03:26.249 18:58:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
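The hugepages.sh@126-@130 idiom above uses bash array indices as a self-sorting set, so the expected and observed per-node counts compare as one joined string (512,1024 on this host). Reconstructed as a runnable sketch seeded with this run's values; the variable names follow the trace, the harness around them is illustrative:

  declare -a sorted_t sorted_s
  nodes_test=([0]=512 [1]=1024)       # expected hugepages per node
  nodes_sys=([0]=512 [1]=1024)        # observed via per-node meminfo
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1    # value becomes an index -> dedup + numeric sort
      sorted_s[nodes_sys[node]]=1
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done
  t=$(IFS=,; printf '%s' "${!sorted_t[*]}")
  s=$(IFS=,; printf '%s' "${!sorted_s[*]}")
  [[ $s == "$t" ]] && echo "distribution matches: $s"   # -> 512,1024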
00:03:26.249 18:58:06 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:26.249 18:58:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:26.249 18:58:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:26.249 18:58:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:26.509 ************************************
00:03:26.509 START TEST no_shrink_alloc
00:03:26.509 ************************************
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
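The get_test_nr_hugepages trace above boils down to a small computation: the requested size in kB divided by the default hugepage size gives the page count, and each node id passed after the size is assigned the full count. A sketch with this run's numbers; the division itself is inferred from the traced values (2097152 kB at a 2048 kB page size -> 1024 pages), not copied from the script:

  default_hugepages=2048                        # kB, Hugepagesize in /proc/meminfo
  size=2097152                                  # kB requested (2 GiB)
  node_ids=(0)                                  # the trailing '0' argument
  (( size >= default_hugepages )) || exit 1     # reject sub-page requests
  nr_hugepages=$((size / default_hugepages))    # -> 1024
  declare -a nodes_test
  for id in "${node_ids[@]}"; do
      nodes_test[id]=$nr_hugepages              # node 0 is asked for all 1024 pages
  done
  echo "nr_hugepages=$nr_hugepages on node(s) ${!nodes_test[*]}"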
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.509 18:58:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:29.802 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:29.802 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:29.802 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:29.802 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.802 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40142628 kB' 'MemAvailable: 43728880 kB' 'Buffers: 2704 kB' 'Cached: 15237200 kB' 'SwapCached: 0 kB' 'Active: 12407416 kB' 'Inactive: 3465204 kB' 'Active(anon): 11895376 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635636 kB' 'Mapped: 175460 kB' 'Shmem: 11262660 kB' 'KReclaimable: 211904 kB' 'Slab: 642308 kB' 'SReclaimable: 211904 kB' 'SUnreclaim: 430404 kB' 'KernelStack: 16640 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13262876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203748 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
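The dump above is the AnonHugePages baseline that verify_nr_hugepages records, gated by the hugepages.sh@96 test on the kernel's THP mode string ('always [madvise] never' here). A sketch of that gate, assuming the mode string lives at its standard sysfs path:

  thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp_mode != *"[never]"* ]]; then       # THP active unless "[never]" is selected
      echo "THP mode: $thp_mode -- recording AnonHugePages baseline"
      grep '^AnonHugePages' /proc/meminfo       # 0 kB in the dump above
  fi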
00:03:30.065 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the scan walks the fields in the dump above, MemTotal through KernelStack so far, still looking for AnonHugePages]
00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.066 18:58:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.066 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40142128 kB' 'MemAvailable: 43728380 kB' 'Buffers: 2704 kB' 'Cached: 15237200 kB' 'SwapCached: 0 kB' 'Active: 12407312 kB' 'Inactive: 3465204 kB' 'Active(anon): 11895272 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 
'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636000 kB' 'Mapped: 175384 kB' 'Shmem: 11262660 kB' 'KReclaimable: 211904 kB' 'Slab: 642264 kB' 'SReclaimable: 211904 kB' 'SUnreclaim: 430360 kB' 'KernelStack: 16640 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13262892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203748 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB' 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.067 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.067 18:58:10 
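For readers skimming this log: each trace block above is one call to get_meminfo() from scripts/setup/common.sh, which walks /proc/meminfo line by line until it finds the requested key. Below is a minimal bash sketch of that loop, reconstructed from the common.sh@17-@33 statements in the trace; it is not the verbatim SPDK source, and details such as quoting and error handling may differ.

    shopt -s extglob

    get_meminfo() { # usage: get_meminfo <field> [<numa-node>]
        local get=$1
        local node=${2:-}
        local var val _
        local mem_f mem

        mem_f=/proc/meminfo
        # prefer the per-node view when a node was given (common.sh@22-23)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"        # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }") # common.sh@29: drop the "Node N " prefix of per-node files

        # common.sh@31-33: the loop producing the long runs of "continue" in the trace
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}") # common.sh@16

        return 1
    }

    get_meminfo HugePages_Total # -> 1024 on the node captured above

Splitting with IFS=': ' is what turns a line like 'HugePages_Surp: 0' into var=HugePages_Surp, val=0, with any trailing 'kB' falling into the throwaway third field.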
00:03:30.067-00:03:30.068 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [condensed: read loop checked MemTotal through HugePages_Rsvd against HugePages_Surp and hit continue on every non-match]
00:03:30.068 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.068 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.068 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.068 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:30.068 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:30.068 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:30.068 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18-29 -- # [condensed: same local declarations, mem_f=/proc/meminfo check and mapfile setup as in the HugePages_Surp call above]
00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40141876 kB' 'MemAvailable: 43728128 kB' 'Buffers: 2704 kB' 'Cached: 15237220 kB' 'SwapCached: 0 kB' 'Active: 12407628 kB' 'Inactive: 3465204 kB' 'Active(anon): 11895588 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636316 kB' 'Mapped: 175384 kB' 'Shmem: 11262680 kB' 'KReclaimable: 211904 kB' 'Slab: 642264 kB' 'SReclaimable: 211904 kB' 'SUnreclaim: 430360 kB' 'KernelStack: 16640 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13263800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203748 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
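As a side note, the hugepage counters in these snapshots can be pulled directly when debugging outside the harness; this plain awk one-liner is illustrative only and is not part of the SPDK scripts:

    awk '/^HugePages_(Total|Free|Rsvd|Surp)/ {print $1, $2}' /proc/meminfo

On the node captured above it would print:

    HugePages_Total: 1024
    HugePages_Free: 1024
    HugePages_Rsvd: 0
    HugePages_Surp: 0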
18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.070 18:58:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[repetitive @31/@32 scan condensed -- CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free are each compared against HugePages_Rsvd and skipped with continue]
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:30.070 nr_hugepages=1024
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:30.070 resv_hugepages=0
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:30.070 surplus_hugepages=0
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:30.070 anon_hugepages=0
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
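
An aside for readability: the get_meminfo helper being traced above is essentially a field lookup over /proc/meminfo (or a per-node meminfo file when a node number is passed). A minimal bash sketch of the same idea -- hypothetical code, not the verbatim SPDK setup/common.sh:

    # Print the value of field $1 (e.g. HugePages_Rsvd) from /proc/meminfo,
    # or from node $2's meminfo file when a node number is given.
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node [0-9]* }            # per-node files prefix each line with "Node N "
            if [[ $line == "$get:"* ]]; then
                read -r val _ <<<"${line#"$get:"}"   # first token after the colon is the value
                echo "$val"
                return 0
            fi
        done <"$mem_f"
        return 1
    }

Called as get_meminfo HugePages_Rsvd it prints 0 on this box, matching the resv=0 assignment in the trace; get_meminfo HugePages_Surp 0 would read node 0's file instead, as the trace below does.
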
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.070 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.071 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.071 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.071 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.071 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40145664 kB' 'MemAvailable: 43731916 kB' 'Buffers: 2704 kB' 'Cached: 15237240 kB' 'SwapCached: 0 kB' 'Active: 12407220 kB' 'Inactive: 3465204 kB' 'Active(anon): 11895180 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635932 kB' 'Mapped: 175384 kB' 'Shmem: 11262700 kB' 'KReclaimable: 211904 kB' 'Slab: 642264 kB' 'SReclaimable: 211904 kB' 'SUnreclaim: 430360 kB' 'KernelStack: 16592 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13262936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203748 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
[repetitive @31/@32 scan condensed -- every field from MemTotal through Unaccepted in the printf dump above is compared against HugePages_Total and skipped with continue]
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
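
The get_nodes call above discovers NUMA nodes by globbing sysfs with extglob and keys a nodes_sys array by node index. A sketch of that discovery; the nr_hugepages sysfs path used here to fill the array is an assumption for illustration (the trace only shows the resulting values 1024 and 0):

    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}    # "/sys/devices/system/node/node0" -> "0"
        # assumed source of the per-node count: the 2 MB hugepage sysfs counter
        nodes_sys[$n]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes"   # 2 on the machine traced above
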
00:03:30.072 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.333 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 22793948 kB' 'MemUsed: 9792964 kB' 'SwapCached: 0 kB' 'Active: 6075236 kB' 'Inactive: 209084 kB' 'Active(anon): 5886920 kB' 'Inactive(anon): 0 kB' 'Active(file): 188316 kB' 'Inactive(file): 209084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5834432 kB' 'Mapped: 61036 kB' 'AnonPages: 453120 kB' 'Shmem: 5437032 kB' 'KernelStack: 9000 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102912 kB' 'Slab: 333500 kB' 'SReclaimable: 102912 kB' 'SUnreclaim: 230588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[repetitive @31/@32 scan condensed -- every field from MemTotal through HugePages_Free in the node0 dump above is compared against HugePages_Surp and skipped with continue]
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:30.334 node0=1024 expecting 1024
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
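
The bookkeeping above (the nodes_test accumulation and the sorted_t/sorted_s marker arrays) boils down to comparing an expected per-node hugepage count against the actual one and printing the "node0=1024 expecting 1024" verdict. Schematically, with the values from this run filled in as assumptions:

    # Compare expected vs. actual per-node hugepage counts (sketch, not the
    # verbatim setup/hugepages.sh; the real script collects distinct values
    # in sorted_t/sorted_s before comparing).
    declare -A nodes_test=( [0]=1024 [1]=0 )   # expected, as computed by the test
    declare -A nodes_sys=(  [0]=1024 [1]=0 )   # actual, as read from sysfs
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
        [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || exit 1
    done
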
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:30.334 18:58:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:33.628 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:33.628 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:33.628 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:33.628 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:33.628 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:33.893 INFO: Requested 512 hugepages but 1024 already allocated on node0
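
That INFO line suggests setup.sh declines to shrink an existing allocation when NRHUGE=512 is smaller than the 1024 pages already reserved on the node -- which is exactly what the no_shrink_alloc test exercises. A hedged sketch of such a guard (illustrative only, not the verbatim scripts/setup.sh):

    # Skip shrinking an existing 2 MB hugepage allocation (assumed guard logic).
    NRHUGE=${NRHUGE:-512}
    nr_file=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    current=$(<"$nr_file")
    if (( current >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
    else
        echo "$NRHUGE" > "$nr_file"   # needs root; kernel grows the pool to NRHUGE pages
    fi
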
kB' 'Committed_AS: 13265968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203716 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB' 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.893 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.894 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
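
The block above is get_meminfo() scanning /proc/meminfo one field at a time for AnonHugePages: each cycle reads a "Field: value" pair (common.sh@31), tests the field name against the target (common.sh@32), and continues until AnonHugePages matches itself; the value is then echoed back and hugepages.sh stores it as anon=0, meaning no transparent hugepages are in play. A minimal sketch of the helper, reconstructed from this xtrace alone; the real setup/common.sh differs at least in how the snapshot reaches mapfile (the trace shows a printf at common.sh@16 supplying it), so treat names and details here as assumptions:

    #!/usr/bin/env bash
    shopt -s extglob # needed for the +([0-9]) pattern used below

    # Reconstruction of get_meminfo() as implied by the trace; illustrative only.
    get_meminfo() {
        local get=$1  # field to look up, e.g. AnonHugePages
        local node=$2 # optional NUMA node id; empty in this run
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # The trace checks a per-node path (common.sh@23) and whether a node
        # id was given (common.sh@25); neither applies in this run, so the
        # global file is used. Folded into one condition here for brevity.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Quoted RHS, which xtrace renders as \A\n\o\n... in the log.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
    }

Against the snapshot printed just below, get_meminfo HugePages_Total would answer 1024 and get_meminfo HugePages_Surp answers 0, which is exactly what the next two passes of the trace establish.

00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc --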
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40134772 kB' 'MemAvailable: 43721024 kB' 'Buffers: 2704 kB' 'Cached: 15237332 kB' 'SwapCached: 0 kB' 'Active: 12408620 kB' 'Inactive: 3465204 kB' 'Active(anon): 11896580 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637048 kB' 'Mapped: 175480 kB' 'Shmem: 11262792 kB' 'KReclaimable: 211904 kB' 'Slab: 641988 kB' 'SReclaimable: 211904 kB' 'SUnreclaim: 430084 kB' 'KernelStack: 16704 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13265736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203748 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 
18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 
18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.895 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.896 18:58:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
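
The pass above ends with HugePages_Surp matching itself, so get_meminfo echoes 0 and the test records surp=0: the kernel holds no surplus hugepages beyond the preallocated pool. A side note on the odd-looking patterns: runs such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not corruption. When the right-hand side of a [[ ... == ... ]] comparison is quoted, bash xtrace prints it with every character backslash-escaped to signal a literal (non-glob) match. A minimal reproduction, assuming an interactive bash:

    $ get=HugePages_Surp
    $ set -x
    $ [[ HugePages_Free == "$get" ]]
    + [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]

The same lookup now repeats for HugePages_Rsvd:

00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.896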
18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.896 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40134712 kB' 'MemAvailable: 43720964 kB' 'Buffers: 2704 kB' 'Cached: 15237356 kB' 'SwapCached: 0 kB' 'Active: 12408676 kB' 'Inactive: 3465204 kB' 'Active(anon): 11896636 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637064 kB' 'Mapped: 175404 kB' 'Shmem: 11262816 kB' 'KReclaimable: 211904 kB' 'Slab: 641996 kB' 'SReclaimable: 211904 kB' 'SUnreclaim: 430092 kB' 'KernelStack: 16816 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13266004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203780 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:33.898 nr_hugepages=1024
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:33.898 resv_hugepages=0
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:33.898 surplus_hugepages=0
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:33.898 anon_hugepages=0
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
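
At this point all three probes are in: anon=0, surp=0, resv=0, and the summary echoes above confirm the pool of 1024 preallocated pages. The two arithmetic checks at hugepages.sh@107 and @109 pass silently (under set -e a failing (( )) would abort the run), so the trace proceeds to re-read HugePages_Total. A sketch of that bookkeeping as the trace implies it; variable names such as hp_total are guesses, not SPDK's actual identifiers:

    # Illustrative reconstruction of setup/hugepages.sh@97-110, not the
    # verbatim script; get_meminfo is the helper sketched earlier.
    nr_hugepages=1024                     # pool size configured by the test
    anon=$(get_meminfo AnonHugePages)     # 0 in this run
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    hp_total=$(get_meminfo HugePages_Total)
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # Healthy pool: the kernel reports exactly the requested page count,
    # with nothing pending as surplus or reserved on top of it.
    (( hp_total == nr_hugepages + surp + resv ))
    (( hp_total == nr_hugepages ))

00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local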
get=HugePages_Total
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.898 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.899 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40134264 kB' 'MemAvailable: 43720516 kB' 'Buffers: 2704 kB' 'Cached: 15237376 kB' 'SwapCached: 0 kB' 'Active: 12408424 kB' 'Inactive: 3465204 kB' 'Active(anon): 11896384 kB' 'Inactive(anon): 0 kB' 'Active(file): 512040 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637320 kB' 'Mapped: 175404 kB' 'Shmem: 11262836 kB' 'KReclaimable: 211904 kB' 'Slab: 641996 kB' 'SReclaimable: 211904 kB' 'SUnreclaim: 430092 kB' 'KernelStack: 16752 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13266028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203780 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2368832 kB' 'DirectMap2M: 30861312 kB' 'DirectMap1G: 35651584 kB'
[xtrace condensed: setup/common.sh@31-32 sets IFS=': ', reads each meminfo field in turn (MemTotal through Unaccepted), and issues continue until the requested HugePages_Total key matches]
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.900 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 22778564 kB' 'MemUsed: 9808348 kB' 'SwapCached: 0 kB' 'Active: 6075444 kB' 'Inactive: 209084 kB' 'Active(anon): 5887128 kB' 'Inactive(anon): 0 kB' 'Active(file): 188316 kB' 'Inactive(file): 209084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5834456 kB' 'Mapped: 61044 kB' 'AnonPages: 453168 kB' 'Shmem: 5437056 kB' 'KernelStack: 9272 kB' 'PageTables: 5164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102912 kB' 'Slab: 333384 kB' 'SReclaimable: 102912 kB' 'SUnreclaim: 230472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 scans the node0 fields (MemTotal through HugePages_Free) and issues continue until the requested HugePages_Surp key matches]
00:03:34.162 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.162 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.162 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:34.162 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:34.162 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:34.162 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:34.162 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
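For reference, the two get_meminfo calls above share one pattern: read /proc/meminfo (or the per-node copy under sysfs), strip the "Node N " prefix, then scan field by field until the requested key matches. A minimal bash re-creation, reconstructed from this xtrace rather than copied from the verbatim setup/common.sh source:

  #!/usr/bin/env bash
  shopt -s extglob    # needed for the +([0-9]) pattern below

  # get_meminfo KEY [NODE] -- print the value of KEY, system-wide or for one
  # NUMA node (sketch reconstructed from the trace above).
  get_meminfo() {
      local get=$1 node=$2
      local var val mem_f mem
      mem_f=/proc/meminfo
      # Prefer the per-node view when the caller named a node and sysfs has it.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node meminfo lines carry a "Node N " prefix; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && echo "$val" && return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Total      # system-wide -> 1024 on this box
  get_meminfo HugePages_Surp 0     # node 0 only -> 0

The clear_hp pass that closes the suite below is the write-side counterpart: it loops over /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages and echoes 0 into each to release the pages.

00:03:34.162 18:58:14 setup.sh.hugepages.no_shrink_alloc --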
setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:34.162 node0=1024 expecting 1024 00:03:34.162 18:58:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.162 00:03:34.162 real 0m7.649s 00:03:34.162 user 0m2.902s 00:03:34.162 sys 0m4.878s 00:03:34.162 18:58:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.162 18:58:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.162 ************************************ 00:03:34.162 END TEST no_shrink_alloc 00:03:34.162 ************************************ 00:03:34.162 18:58:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:34.162 18:58:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:34.162 00:03:34.162 real 0m27.910s 00:03:34.162 user 0m10.495s 00:03:34.162 sys 0m17.923s 00:03:34.162 18:58:14 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.162 18:58:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.162 ************************************ 00:03:34.162 END TEST hugepages 00:03:34.162 ************************************ 00:03:34.162 18:58:14 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:34.162 18:58:14 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:34.162 18:58:14 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.162 18:58:14 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.162 18:58:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:34.162 ************************************ 00:03:34.162 START TEST driver 00:03:34.162 ************************************ 00:03:34.162 18:58:14 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:34.162 * Looking for test storage... 
00:03:34.162 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:34.162 18:58:14 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:34.162 18:58:14 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.162 18:58:14 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.728 18:58:19 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:40.728 18:58:19 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.728 18:58:19 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.728 18:58:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:40.728 ************************************ 00:03:40.728 START TEST guess_driver 00:03:40.728 ************************************ 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 167 > 0 )) 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:40.728 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:40.728 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:40.728 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:40.728 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:40.728 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:40.728 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:40.728 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:40.728 18:58:19 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
Looking for driver=vfio-pci
00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.728 18:58:19 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
[xtrace condensed, 00:03:43.265-00:03:43.525: setup/driver.sh@57-61 reads every '-> driver' marker line emitted by setup.sh config; each device reports vfio-pci, so no check fails]
00:03:43.525 18:58:23 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:43.525 18:58:23 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:43.525 18:58:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
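Stripped of the xtrace, the guess_driver decision above is small: vfio-pci wins when the kernel has populated at least one IOMMU group (167 on this node) and modprobe can resolve vfio_pci to real .ko modules. A condensed sketch, reconstructed from the trace (error paths trimmed; the literal 'No valid driver found' string is the one setup/driver.sh@51 tests against):

  #!/usr/bin/env bash

  # Sketch of the vfio/pick_driver logic traced above (reconstructed, not the
  # verbatim setup/driver.sh source).
  vfio() {
      local unsafe_vfio
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      local iommu_groups=(/sys/kernel/iommu_groups/*)
      # Usable when the IOMMU populated groups (167 here, unsafe_vfio=N) and
      # modprobe --show-depends resolves vfio_pci to actual kernel modules.
      if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
          if [[ $(modprobe --show-depends vfio_pci) == *.ko* ]]; then
              echo vfio-pci
              return 0
          fi
      fi
      return 1
  }

  driver=$(vfio) || driver='No valid driver found'
  [[ $driver == 'No valid driver found' ]] && exit 1
  echo "Looking for driver=$driver"

The marker loop that follows merely confirms that setup.sh config bound every device to that driver before the suite resets.

00:03:43.525 18:58:23 setup.sh.driver.guess_driver -- setup/common.sh@12 -- #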
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.800 00:03:48.800 real 0m9.278s 00:03:48.800 user 0m2.950s 00:03:48.800 sys 0m5.530s 00:03:48.800 18:58:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.800 18:58:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.800 ************************************ 00:03:48.800 END TEST guess_driver 00:03:48.800 ************************************ 00:03:49.058 18:58:29 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:49.059 00:03:49.059 real 0m14.780s 00:03:49.059 user 0m4.488s 00:03:49.059 sys 0m8.617s 00:03:49.059 18:58:29 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.059 18:58:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.059 ************************************ 00:03:49.059 END TEST driver 00:03:49.059 ************************************ 00:03:49.059 18:58:29 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:49.059 18:58:29 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:49.059 18:58:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.059 18:58:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.059 18:58:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:49.059 ************************************ 00:03:49.059 START TEST devices 00:03:49.059 ************************************ 00:03:49.059 18:58:29 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:49.059 * Looking for test storage... 
00:03:49.059 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:49.059 18:58:29 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:49.059 18:58:29 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:49.059 18:58:29 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.059 18:58:29 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.249 18:58:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:53.518 18:58:33 setup.sh.devices -- 
scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:53.518 No valid GPT data, bailing 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:53.518 18:58:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:53.518 18:58:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:53.518 18:58:33 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:af:00.0 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\a\f\:\0\0\.\0* ]] 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:03:53.518 No valid GPT data, bailing 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:53.518 18:58:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:53.518 18:58:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:53.518 18:58:33 setup.sh.devices -- setup/common.sh@80 -- # echo 375083606016 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 375083606016 >= min_disk_size )) 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:af:00.0 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:b0:00.0 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\b\0\:\0\0\.\0* ]] 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@387 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:03:53.518 No valid GPT data, bailing 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:53.518 18:58:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:03:53.518 18:58:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:03:53.518 18:58:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:03:53.518 18:58:33 setup.sh.devices -- setup/common.sh@80 -- # echo 375083606016 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 375083606016 >= min_disk_size )) 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:b0:00.0 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 3 > 0 )) 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:53.518 18:58:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.518 18:58:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:53.518 ************************************ 00:03:53.518 START TEST nvme_mount 00:03:53.518 ************************************ 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:53.518 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.519 18:58:33 
setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:53.519 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:53.519 18:58:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:54.895 Creating new GPT entries in memory. 00:03:54.896 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:54.896 other utilities. 00:03:54.896 18:58:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:54.896 18:58:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.896 18:58:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:54.896 18:58:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:54.896 18:58:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:55.830 Creating new GPT entries in memory. 00:03:55.830 The operation has completed successfully. 00:03:55.830 18:58:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:55.830 18:58:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:55.830 18:58:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 637704 00:03:55.830 18:58:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.830 18:58:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:55.830 18:58:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.830 18:58:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:55.830 18:58:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- 
setup/devices.sh@56 -- # : 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.830 18:58:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.118 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.378 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.378 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:59.378 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.378 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:59.378 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:59.378 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:59.378 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.378 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.378 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.378 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:59.378 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:59.378 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.378 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:59.637 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:59.637 /dev/nvme0n1: 8 bytes were erased at offset 
0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:03:59.637 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:59.637 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:59.637 18:58:39 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:59.637 18:58:39 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:59.637 18:58:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.637 18:58:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:59.637 18:58:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:59.637 18:58:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.637 18:58:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.975 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.252 18:58:43 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.252 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.512 18:58:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:06.807 18:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.807 18:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:06.807 18:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:06.807 18:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.807 18:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.807 18:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 
]] 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.807 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.066 18:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:07.066 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:07.067 00:04:07.067 real 0m13.572s 00:04:07.067 user 0m4.113s 00:04:07.067 sys 0m7.441s 00:04:07.067 18:58:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.067 18:58:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:07.067 ************************************ 00:04:07.067 END TEST nvme_mount 00:04:07.067 ************************************ 00:04:07.326 18:58:47 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:07.326 18:58:47 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:07.326 18:58:47 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.326 18:58:47 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.326 18:58:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:07.326 ************************************ 00:04:07.326 START TEST dm_mount 00:04:07.326 ************************************ 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:07.326 18:58:47 setup.sh.devices.dm_mount 
-- setup/common.sh@46 -- # (( part <= part_no )) 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:07.326 18:58:47 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:08.265 Creating new GPT entries in memory. 00:04:08.265 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:08.265 other utilities. 00:04:08.265 18:58:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:08.265 18:58:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:08.265 18:58:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:08.265 18:58:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:08.265 18:58:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:09.203 Creating new GPT entries in memory. 00:04:09.203 The operation has completed successfully. 00:04:09.203 18:58:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:09.203 18:58:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:09.203 18:58:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:09.203 18:58:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:09.203 18:58:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:10.211 The operation has completed successfully. 
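At this point the dm_mount test has zapped the disk and created both 1 GiB partitions; what follows in the trace is the dmsetup create, the mkfs on the mapped device, and the mount. For orientation, the layout can be reproduced by hand roughly as sketched here. The disk path and mount point are illustrative, and the linear device-mapper table is an assumption inferred from the nvme0n1p1/holders/dm-0 and nvme0n1p2/holders/dm-0 links seen in the log, not the test's literal table:

#!/usr/bin/env bash
# Sketch: rebuild the dm_mount layout by hand. Sector ranges match the
# sgdisk calls in this log (two 1 GiB partitions, 2097152 sectors each).
set -e
DISK=/dev/nvme0n1                        # illustrative device
sgdisk "$DISK" --zap-all
sgdisk "$DISK" --new=1:2048:2099199
sgdisk "$DISK" --new=2:2099200:4196351
# Assumed table: linear concatenation of both partitions into one dm device.
dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test   # same mkfs invocation the test uses
mount /dev/mapper/nvme_dm_test /mnt      # mount point is illustrative

Teardown reverses the order, as the cleanup_dm trace later in this section shows: umount the mount point, dmsetup remove --force nvme_dm_test, then wipefs --all on both partitions.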
00:04:10.211 18:58:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:10.211 18:58:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.211 18:58:50 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 641963 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.471 18:58:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:13.778 18:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.778 18:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:13.778 18:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:13.778 18:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 18:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.778 18:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.778 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.037 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@55 -- # [[ -n '' ]] 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.297 18:58:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.588 18:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:17.847 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:17.847 00:04:17.847 real 0m10.688s 00:04:17.847 user 0m2.712s 00:04:17.847 sys 0m5.067s 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.847 18:58:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:17.847 ************************************ 00:04:17.847 END TEST dm_mount 00:04:17.847 ************************************ 00:04:18.106 18:58:58 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:18.106 18:58:58 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:18.106 
18:58:58 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:18.106 18:58:58 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.106 18:58:58 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.106 18:58:58 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:18.106 18:58:58 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.106 18:58:58 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.364 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:18.364 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:18.364 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:18.364 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:18.364 18:58:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:18.364 18:58:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:18.364 18:58:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:18.364 18:58:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.364 18:58:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:18.365 18:58:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.365 18:58:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:18.365 00:04:18.365 real 0m29.258s 00:04:18.365 user 0m8.580s 00:04:18.365 sys 0m15.696s 00:04:18.365 18:58:58 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.365 18:58:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:18.365 ************************************ 00:04:18.365 END TEST devices 00:04:18.365 ************************************ 00:04:18.365 18:58:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:18.365 00:04:18.365 real 1m40.067s 00:04:18.365 user 0m32.489s 00:04:18.365 sys 0m59.421s 00:04:18.365 18:58:58 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.365 18:58:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:18.365 ************************************ 00:04:18.365 END TEST setup.sh 00:04:18.365 ************************************ 00:04:18.365 18:58:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:18.365 18:58:58 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:22.557 Hugepages 00:04:22.557 node hugesize free / total 00:04:22.557 node0 1048576kB 0 / 0 00:04:22.557 node0 2048kB 2048 / 2048 00:04:22.557 node1 1048576kB 0 / 0 00:04:22.557 node1 2048kB 0 / 0 00:04:22.557 00:04:22.557 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.557 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:22.558 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:22.558 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:22.558 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:22.558 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:22.558 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:22.558 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:22.558 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:22.558 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:22.558 I/OAT 0000:80:04.0 8086 2021 1 
ioatdma - - 00:04:22.558 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:22.558 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:22.558 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:22.558 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:22.558 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:22.558 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:22.558 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:22.558 NVMe 0000:af:00.0 8086 2701 1 nvme nvme1 nvme1n1 00:04:22.558 NVMe 0000:b0:00.0 8086 2701 1 nvme nvme2 nvme2n1 00:04:22.558 18:59:02 -- spdk/autotest.sh@130 -- # uname -s 00:04:22.558 18:59:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:22.558 18:59:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:22.558 18:59:02 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:26.751 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:04:26.751 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:04:26.751 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.751 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.688 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:04:27.947 18:59:08 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:28.885 18:59:09 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:28.885 18:59:09 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:28.885 18:59:09 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:28.885 18:59:09 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:28.885 18:59:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:28.885 18:59:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:28.885 18:59:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.885 18:59:09 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:28.885 18:59:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:29.143 18:59:09 -- common/autotest_common.sh@1515 -- # (( 3 == 0 )) 00:04:29.143 18:59:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:04:29.143 18:59:09 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.433 Waiting for block devices as requested 00:04:32.692 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:04:32.692 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:04:32.951 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:32.951 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:32.951 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:33.209 0000:00:04.4 (8086 2021): 
vfio-pci -> ioatdma 00:04:33.209 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:33.209 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:33.469 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:33.469 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:33.469 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:04:33.728 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:33.728 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:33.987 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:33.987 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:33.987 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:34.245 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:34.245 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:34.245 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:34.504 18:59:14 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:34.504 18:59:14 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:34.504 18:59:14 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:34.504 18:59:14 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:34.504 18:59:14 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:34.504 18:59:14 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:34.504 18:59:14 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:34.504 18:59:14 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:34.504 18:59:14 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:34.504 18:59:14 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:34.504 18:59:14 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:34.504 18:59:14 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:34.504 18:59:14 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:34.504 18:59:14 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:04:34.504 18:59:14 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:34.504 18:59:14 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:34.504 18:59:14 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:34.504 18:59:14 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:34.504 18:59:14 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:34.504 18:59:14 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:34.504 18:59:14 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:34.504 18:59:14 -- common/autotest_common.sh@1557 -- # continue 00:04:34.504 18:59:14 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:34.504 18:59:14 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:af:00.0 00:04:34.504 18:59:14 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:34.504 18:59:14 -- common/autotest_common.sh@1502 -- # grep 0000:af:00.0/nvme/nvme 00:04:34.504 18:59:14 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:04:34.504 18:59:14 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 ]] 00:04:34.504 18:59:14 -- common/autotest_common.sh@1507 -- # basename 
/sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:04:34.504 18:59:14 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:34.504 18:59:14 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:34.504 18:59:14 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:34.504 18:59:14 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:34.504 18:59:14 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:34.504 18:59:14 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:34.504 18:59:14 -- common/autotest_common.sh@1545 -- # oacs=' 0x7' 00:04:34.504 18:59:14 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:04:34.504 18:59:14 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:04:34.504 18:59:14 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:34.504 18:59:14 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:b0:00.0 00:04:34.504 18:59:14 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:34.504 18:59:14 -- common/autotest_common.sh@1502 -- # grep 0000:b0:00.0/nvme/nvme 00:04:34.504 18:59:14 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:04:34.504 18:59:14 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 ]] 00:04:34.504 18:59:14 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:04:34.504 18:59:14 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:04:34.504 18:59:14 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:04:34.504 18:59:14 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:04:34.504 18:59:14 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:04:34.504 18:59:14 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:34.504 18:59:14 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:34.504 18:59:14 -- common/autotest_common.sh@1545 -- # oacs=' 0x7' 00:04:34.504 18:59:14 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:04:34.504 18:59:14 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:04:34.504 18:59:14 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:34.504 18:59:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.504 18:59:14 -- common/autotest_common.sh@10 -- # set +x 00:04:34.504 18:59:14 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:34.504 18:59:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.504 18:59:14 -- common/autotest_common.sh@10 -- # set +x 00:04:34.504 18:59:14 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:38.697 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:04:38.697 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:04:38.697 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:80:04.5 
(8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:04:38.697 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:38.697 18:59:18 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:38.697 18:59:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:38.697 18:59:18 -- common/autotest_common.sh@10 -- # set +x 00:04:38.697 18:59:18 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:38.697 18:59:18 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:38.697 18:59:18 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:38.697 18:59:18 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:38.697 18:59:18 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:38.697 18:59:18 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:38.697 18:59:18 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:38.697 18:59:18 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:38.697 18:59:18 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:38.697 18:59:18 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:38.697 18:59:18 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:38.697 18:59:19 -- common/autotest_common.sh@1515 -- # (( 3 == 0 )) 00:04:38.697 18:59:19 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:04:38.697 18:59:19 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:38.697 18:59:19 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:38.697 18:59:19 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:04:38.697 18:59:19 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:38.697 18:59:19 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:38.697 18:59:19 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:af:00.0/device 00:04:38.697 18:59:19 -- common/autotest_common.sh@1580 -- # device=0x2701 00:04:38.697 18:59:19 -- common/autotest_common.sh@1581 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:04:38.697 18:59:19 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:38.697 18:59:19 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:b0:00.0/device 00:04:38.697 18:59:19 -- common/autotest_common.sh@1580 -- # device=0x2701 00:04:38.697 18:59:19 -- common/autotest_common.sh@1581 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:04:38.697 18:59:19 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:38.697 18:59:19 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:38.697 18:59:19 -- common/autotest_common.sh@1593 -- # return 0 00:04:38.697 18:59:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:38.697 18:59:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:38.697 18:59:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:38.697 18:59:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:38.697 18:59:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:38.697 18:59:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:38.697 18:59:19 -- common/autotest_common.sh@10 -- # set +x 00:04:38.697 18:59:19 -- 
spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:38.697 18:59:19 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:38.697 18:59:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.697 18:59:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.697 18:59:19 -- common/autotest_common.sh@10 -- # set +x 00:04:38.697 ************************************ 00:04:38.697 START TEST env 00:04:38.697 ************************************ 00:04:38.697 18:59:19 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:38.956 * Looking for test storage... 00:04:38.956 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:38.956 18:59:19 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:38.956 18:59:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.956 18:59:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.956 18:59:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.956 ************************************ 00:04:38.956 START TEST env_memory 00:04:38.956 ************************************ 00:04:38.956 18:59:19 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:38.956 00:04:38.956 00:04:38.956 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.956 http://cunit.sourceforge.net/ 00:04:38.956 00:04:38.956 00:04:38.956 Suite: memory 00:04:38.956 Test: alloc and free memory map ...[2024-07-15 18:59:19.305829] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:38.956 passed 00:04:38.956 Test: mem map translation ...[2024-07-15 18:59:19.319610] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:38.956 [2024-07-15 18:59:19.319629] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:38.956 [2024-07-15 18:59:19.319661] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:38.956 [2024-07-15 18:59:19.319671] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:38.956 passed 00:04:38.956 Test: mem map registration ...[2024-07-15 18:59:19.340429] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:38.956 [2024-07-15 18:59:19.340450] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:38.956 passed 00:04:38.956 Test: mem map adjacent registrations ...passed 00:04:38.956 00:04:38.956 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.956 suites 1 1 n/a 0 0 00:04:38.956 tests 4 4 4 0 0 00:04:38.956 asserts 152 152 152 0 n/a 00:04:38.956 00:04:38.956 Elapsed time = 0.086 
seconds 00:04:38.956 00:04:38.956 real 0m0.100s 00:04:38.956 user 0m0.086s 00:04:38.956 sys 0m0.013s 00:04:38.956 18:59:19 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.956 18:59:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:38.956 ************************************ 00:04:38.956 END TEST env_memory 00:04:38.956 ************************************ 00:04:39.216 18:59:19 env -- common/autotest_common.sh@1142 -- # return 0 00:04:39.216 18:59:19 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:39.216 18:59:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.216 18:59:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.216 18:59:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.216 ************************************ 00:04:39.216 START TEST env_vtophys 00:04:39.216 ************************************ 00:04:39.216 18:59:19 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:39.216 EAL: lib.eal log level changed from notice to debug 00:04:39.216 EAL: Detected lcore 0 as core 0 on socket 0 00:04:39.216 EAL: Detected lcore 1 as core 1 on socket 0 00:04:39.216 EAL: Detected lcore 2 as core 2 on socket 0 00:04:39.216 EAL: Detected lcore 3 as core 3 on socket 0 00:04:39.216 EAL: Detected lcore 4 as core 4 on socket 0 00:04:39.216 EAL: Detected lcore 5 as core 8 on socket 0 00:04:39.216 EAL: Detected lcore 6 as core 9 on socket 0 00:04:39.216 EAL: Detected lcore 7 as core 10 on socket 0 00:04:39.216 EAL: Detected lcore 8 as core 11 on socket 0 00:04:39.216 EAL: Detected lcore 9 as core 16 on socket 0 00:04:39.216 EAL: Detected lcore 10 as core 17 on socket 0 00:04:39.216 EAL: Detected lcore 11 as core 18 on socket 0 00:04:39.216 EAL: Detected lcore 12 as core 19 on socket 0 00:04:39.216 EAL: Detected lcore 13 as core 20 on socket 0 00:04:39.216 EAL: Detected lcore 14 as core 24 on socket 0 00:04:39.216 EAL: Detected lcore 15 as core 25 on socket 0 00:04:39.216 EAL: Detected lcore 16 as core 26 on socket 0 00:04:39.216 EAL: Detected lcore 17 as core 27 on socket 0 00:04:39.216 EAL: Detected lcore 18 as core 0 on socket 1 00:04:39.216 EAL: Detected lcore 19 as core 1 on socket 1 00:04:39.216 EAL: Detected lcore 20 as core 2 on socket 1 00:04:39.216 EAL: Detected lcore 21 as core 3 on socket 1 00:04:39.216 EAL: Detected lcore 22 as core 4 on socket 1 00:04:39.216 EAL: Detected lcore 23 as core 8 on socket 1 00:04:39.216 EAL: Detected lcore 24 as core 9 on socket 1 00:04:39.216 EAL: Detected lcore 25 as core 10 on socket 1 00:04:39.216 EAL: Detected lcore 26 as core 11 on socket 1 00:04:39.216 EAL: Detected lcore 27 as core 16 on socket 1 00:04:39.216 EAL: Detected lcore 28 as core 17 on socket 1 00:04:39.216 EAL: Detected lcore 29 as core 18 on socket 1 00:04:39.216 EAL: Detected lcore 30 as core 19 on socket 1 00:04:39.216 EAL: Detected lcore 31 as core 20 on socket 1 00:04:39.216 EAL: Detected lcore 32 as core 24 on socket 1 00:04:39.216 EAL: Detected lcore 33 as core 25 on socket 1 00:04:39.216 EAL: Detected lcore 34 as core 26 on socket 1 00:04:39.216 EAL: Detected lcore 35 as core 27 on socket 1 00:04:39.216 EAL: Detected lcore 36 as core 0 on socket 0 00:04:39.216 EAL: Detected lcore 37 as core 1 on socket 0 00:04:39.216 EAL: Detected lcore 38 as core 2 on socket 0 00:04:39.216 EAL: Detected lcore 39 as core 3 on socket 0 
00:04:39.216 EAL: Detected lcore 40 as core 4 on socket 0 00:04:39.216 EAL: Detected lcore 41 as core 8 on socket 0 00:04:39.216 EAL: Detected lcore 42 as core 9 on socket 0 00:04:39.216 EAL: Detected lcore 43 as core 10 on socket 0 00:04:39.216 EAL: Detected lcore 44 as core 11 on socket 0 00:04:39.216 EAL: Detected lcore 45 as core 16 on socket 0 00:04:39.216 EAL: Detected lcore 46 as core 17 on socket 0 00:04:39.216 EAL: Detected lcore 47 as core 18 on socket 0 00:04:39.216 EAL: Detected lcore 48 as core 19 on socket 0 00:04:39.216 EAL: Detected lcore 49 as core 20 on socket 0 00:04:39.216 EAL: Detected lcore 50 as core 24 on socket 0 00:04:39.216 EAL: Detected lcore 51 as core 25 on socket 0 00:04:39.216 EAL: Detected lcore 52 as core 26 on socket 0 00:04:39.216 EAL: Detected lcore 53 as core 27 on socket 0 00:04:39.216 EAL: Detected lcore 54 as core 0 on socket 1 00:04:39.216 EAL: Detected lcore 55 as core 1 on socket 1 00:04:39.216 EAL: Detected lcore 56 as core 2 on socket 1 00:04:39.216 EAL: Detected lcore 57 as core 3 on socket 1 00:04:39.216 EAL: Detected lcore 58 as core 4 on socket 1 00:04:39.216 EAL: Detected lcore 59 as core 8 on socket 1 00:04:39.216 EAL: Detected lcore 60 as core 9 on socket 1 00:04:39.216 EAL: Detected lcore 61 as core 10 on socket 1 00:04:39.216 EAL: Detected lcore 62 as core 11 on socket 1 00:04:39.216 EAL: Detected lcore 63 as core 16 on socket 1 00:04:39.216 EAL: Detected lcore 64 as core 17 on socket 1 00:04:39.216 EAL: Detected lcore 65 as core 18 on socket 1 00:04:39.216 EAL: Detected lcore 66 as core 19 on socket 1 00:04:39.216 EAL: Detected lcore 67 as core 20 on socket 1 00:04:39.216 EAL: Detected lcore 68 as core 24 on socket 1 00:04:39.216 EAL: Detected lcore 69 as core 25 on socket 1 00:04:39.216 EAL: Detected lcore 70 as core 26 on socket 1 00:04:39.216 EAL: Detected lcore 71 as core 27 on socket 1 00:04:39.216 EAL: Maximum logical cores by configuration: 128 00:04:39.216 EAL: Detected CPU lcores: 72 00:04:39.216 EAL: Detected NUMA nodes: 2 00:04:39.216 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:39.216 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:39.216 EAL: Checking presence of .so 'librte_eal.so' 00:04:39.216 EAL: Detected static linkage of DPDK 00:04:39.216 EAL: No shared files mode enabled, IPC will be disabled 00:04:39.216 EAL: Bus pci wants IOVA as 'DC' 00:04:39.216 EAL: Buses did not request a specific IOVA mode. 00:04:39.216 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:39.216 EAL: Selected IOVA mode 'VA' 00:04:39.216 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.216 EAL: Probing VFIO support... 00:04:39.216 EAL: IOMMU type 1 (Type 1) is supported 00:04:39.216 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:39.216 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:39.216 EAL: VFIO support initialized 00:04:39.216 EAL: Ask a virtual area of 0x2e000 bytes 00:04:39.216 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:39.216 EAL: Setting up physically contiguous memory... 
00:04:39.216 EAL: Setting maximum number of open files to 524288 00:04:39.216 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:39.216 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:39.216 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:39.216 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.216 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:39.216 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.216 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.216 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:39.216 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:39.216 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.216 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:39.216 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.216 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.216 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:39.216 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:39.216 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.216 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:39.216 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.216 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.217 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:39.217 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:39.217 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.217 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:39.217 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.217 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.217 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:39.217 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:39.217 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:39.217 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.217 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:39.217 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:39.217 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.217 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:39.217 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:39.217 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.217 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:39.217 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:39.217 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.217 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:39.217 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:39.217 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.217 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:39.217 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:39.217 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.217 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:39.217 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:39.217 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.217 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:39.217 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:39.217 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.217 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:39.217 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:39.217 EAL: Hugepages will be freed exactly as allocated. 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: TSC frequency is ~2300000 KHz 00:04:39.217 EAL: Main lcore 0 is ready (tid=7fe2c6b05a00;cpuset=[0]) 00:04:39.217 EAL: Trying to obtain current memory policy. 00:04:39.217 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.217 EAL: Restoring previous memory policy: 0 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was expanded by 2MB 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Mem event callback 'spdk:(nil)' registered 00:04:39.217 00:04:39.217 00:04:39.217 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.217 http://cunit.sourceforge.net/ 00:04:39.217 00:04:39.217 00:04:39.217 Suite: components_suite 00:04:39.217 Test: vtophys_malloc_test ...passed 00:04:39.217 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:39.217 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.217 EAL: Restoring previous memory policy: 4 00:04:39.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was expanded by 4MB 00:04:39.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was shrunk by 4MB 00:04:39.217 EAL: Trying to obtain current memory policy. 00:04:39.217 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.217 EAL: Restoring previous memory policy: 4 00:04:39.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was expanded by 6MB 00:04:39.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was shrunk by 6MB 00:04:39.217 EAL: Trying to obtain current memory policy. 00:04:39.217 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.217 EAL: Restoring previous memory policy: 4 00:04:39.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was expanded by 10MB 00:04:39.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was shrunk by 10MB 00:04:39.217 EAL: Trying to obtain current memory policy. 
00:04:39.217 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.217 EAL: Restoring previous memory policy: 4 00:04:39.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was expanded by 18MB 00:04:39.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was shrunk by 18MB 00:04:39.217 EAL: Trying to obtain current memory policy. 00:04:39.217 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.217 EAL: Restoring previous memory policy: 4 00:04:39.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was expanded by 34MB 00:04:39.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was shrunk by 34MB 00:04:39.217 EAL: Trying to obtain current memory policy. 00:04:39.217 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.217 EAL: Restoring previous memory policy: 4 00:04:39.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was expanded by 66MB 00:04:39.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.217 EAL: request: mp_malloc_sync 00:04:39.217 EAL: No shared files mode enabled, IPC is disabled 00:04:39.217 EAL: Heap on socket 0 was shrunk by 66MB 00:04:39.217 EAL: Trying to obtain current memory policy. 00:04:39.217 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.502 EAL: Restoring previous memory policy: 4 00:04:39.502 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.502 EAL: request: mp_malloc_sync 00:04:39.502 EAL: No shared files mode enabled, IPC is disabled 00:04:39.502 EAL: Heap on socket 0 was expanded by 130MB 00:04:39.502 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.502 EAL: request: mp_malloc_sync 00:04:39.502 EAL: No shared files mode enabled, IPC is disabled 00:04:39.502 EAL: Heap on socket 0 was shrunk by 130MB 00:04:39.502 EAL: Trying to obtain current memory policy. 00:04:39.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.502 EAL: Restoring previous memory policy: 4 00:04:39.502 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.502 EAL: request: mp_malloc_sync 00:04:39.502 EAL: No shared files mode enabled, IPC is disabled 00:04:39.502 EAL: Heap on socket 0 was expanded by 258MB 00:04:39.502 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.502 EAL: request: mp_malloc_sync 00:04:39.502 EAL: No shared files mode enabled, IPC is disabled 00:04:39.502 EAL: Heap on socket 0 was shrunk by 258MB 00:04:39.502 EAL: Trying to obtain current memory policy. 
00:04:39.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.762 EAL: Restoring previous memory policy: 4 00:04:39.762 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.762 EAL: request: mp_malloc_sync 00:04:39.762 EAL: No shared files mode enabled, IPC is disabled 00:04:39.762 EAL: Heap on socket 0 was expanded by 514MB 00:04:39.762 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.762 EAL: request: mp_malloc_sync 00:04:39.762 EAL: No shared files mode enabled, IPC is disabled 00:04:39.762 EAL: Heap on socket 0 was shrunk by 514MB 00:04:39.762 EAL: Trying to obtain current memory policy. 00:04:39.762 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.022 EAL: Restoring previous memory policy: 4 00:04:40.022 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.022 EAL: request: mp_malloc_sync 00:04:40.022 EAL: No shared files mode enabled, IPC is disabled 00:04:40.022 EAL: Heap on socket 0 was expanded by 1026MB 00:04:40.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.542 EAL: request: mp_malloc_sync 00:04:40.542 EAL: No shared files mode enabled, IPC is disabled 00:04:40.542 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:40.542 passed 00:04:40.542 00:04:40.542 Run Summary: Type Total Ran Passed Failed Inactive 00:04:40.542 suites 1 1 n/a 0 0 00:04:40.542 tests 2 2 2 0 0 00:04:40.542 asserts 497 497 497 0 n/a 00:04:40.542 00:04:40.542 Elapsed time = 1.144 seconds 00:04:40.542 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.542 EAL: request: mp_malloc_sync 00:04:40.542 EAL: No shared files mode enabled, IPC is disabled 00:04:40.542 EAL: Heap on socket 0 was shrunk by 2MB 00:04:40.542 EAL: No shared files mode enabled, IPC is disabled 00:04:40.542 EAL: No shared files mode enabled, IPC is disabled 00:04:40.542 EAL: No shared files mode enabled, IPC is disabled 00:04:40.542 00:04:40.542 real 0m1.283s 00:04:40.542 user 0m0.740s 00:04:40.542 sys 0m0.519s 00:04:40.542 18:59:20 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.542 18:59:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:40.542 ************************************ 00:04:40.542 END TEST env_vtophys 00:04:40.542 ************************************ 00:04:40.542 18:59:20 env -- common/autotest_common.sh@1142 -- # return 0 00:04:40.542 18:59:20 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:40.542 18:59:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.542 18:59:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.542 18:59:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:40.542 ************************************ 00:04:40.542 START TEST env_pci 00:04:40.542 ************************************ 00:04:40.542 18:59:20 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:40.542 00:04:40.542 00:04:40.542 CUnit - A unit testing framework for C - Version 2.1-3 00:04:40.542 http://cunit.sourceforge.net/ 00:04:40.542 00:04:40.542 00:04:40.542 Suite: pci 00:04:40.542 Test: pci_hook ...[2024-07-15 18:59:20.847591] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 651340 has claimed it 00:04:40.542 EAL: Cannot find device (10000:00:01.0) 00:04:40.542 EAL: Failed to attach device on primary process 00:04:40.542 passed 
00:04:40.542 00:04:40.542 Run Summary: Type Total Ran Passed Failed Inactive 00:04:40.542 suites 1 1 n/a 0 0 00:04:40.542 tests 1 1 1 0 0 00:04:40.542 asserts 25 25 25 0 n/a 00:04:40.542 00:04:40.542 Elapsed time = 0.034 seconds 00:04:40.542 00:04:40.542 real 0m0.055s 00:04:40.542 user 0m0.014s 00:04:40.542 sys 0m0.041s 00:04:40.542 18:59:20 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.542 18:59:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:40.542 ************************************ 00:04:40.542 END TEST env_pci 00:04:40.542 ************************************ 00:04:40.542 18:59:20 env -- common/autotest_common.sh@1142 -- # return 0 00:04:40.542 18:59:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:40.542 18:59:20 env -- env/env.sh@15 -- # uname 00:04:40.542 18:59:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:40.542 18:59:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:40.542 18:59:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:40.542 18:59:20 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:40.542 18:59:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.542 18:59:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:40.802 ************************************ 00:04:40.802 START TEST env_dpdk_post_init 00:04:40.802 ************************************ 00:04:40.802 18:59:20 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:40.802 EAL: Detected CPU lcores: 72 00:04:40.802 EAL: Detected NUMA nodes: 2 00:04:40.802 EAL: Detected static linkage of DPDK 00:04:40.802 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:40.802 EAL: Selected IOVA mode 'VA' 00:04:40.802 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.802 EAL: VFIO support initialized 00:04:40.802 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:40.802 EAL: Using IOMMU type 1 (Type 1) 00:04:41.061 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:5e:00.0 (socket 0) 00:04:41.321 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:af:00.0 (socket 1) 00:04:41.580 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:b0:00.0 (socket 1) 00:04:41.580 EAL: Releasing PCI mapped resource for 0000:af:00.0 00:04:41.580 EAL: Calling pci_unmap_resource for 0000:af:00.0 at 0x202001004000 00:04:41.580 EAL: Releasing PCI mapped resource for 0000:b0:00.0 00:04:41.580 EAL: Calling pci_unmap_resource for 0000:b0:00.0 at 0x202001008000 00:04:41.839 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:41.839 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:04:41.839 Starting DPDK initialization... 00:04:41.839 Starting SPDK post initialization... 00:04:41.839 SPDK NVMe probe 00:04:41.839 Attaching to 0000:5e:00.0 00:04:41.839 Attaching to 0000:af:00.0 00:04:41.839 Attaching to 0000:b0:00.0 00:04:41.839 Attached to 0000:af:00.0 00:04:41.839 Attached to 0000:b0:00.0 00:04:41.839 Attached to 0000:5e:00.0 00:04:41.839 Cleaning up... 
00:04:41.839 00:04:41.839 real 0m1.144s 00:04:41.839 user 0m0.359s 00:04:41.839 sys 0m0.108s 00:04:41.839 18:59:22 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.839 18:59:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.839 ************************************ 00:04:41.839 END TEST env_dpdk_post_init 00:04:41.839 ************************************ 00:04:41.839 18:59:22 env -- common/autotest_common.sh@1142 -- # return 0 00:04:41.839 18:59:22 env -- env/env.sh@26 -- # uname 00:04:41.839 18:59:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:41.839 18:59:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.839 18:59:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.839 18:59:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.839 18:59:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.839 ************************************ 00:04:41.839 START TEST env_mem_callbacks 00:04:41.839 ************************************ 00:04:41.839 18:59:22 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.839 EAL: Detected CPU lcores: 72 00:04:41.839 EAL: Detected NUMA nodes: 2 00:04:41.839 EAL: Detected static linkage of DPDK 00:04:41.839 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.839 EAL: Selected IOVA mode 'VA' 00:04:41.839 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.839 EAL: VFIO support initialized 00:04:42.098 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.098 00:04:42.098 00:04:42.098 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.098 http://cunit.sourceforge.net/ 00:04:42.098 00:04:42.098 00:04:42.098 Suite: memory 00:04:42.098 Test: test ... 
00:04:42.098 register 0x200000200000 2097152 00:04:42.098 malloc 3145728 00:04:42.098 register 0x200000400000 4194304 00:04:42.098 buf 0x200000500000 len 3145728 PASSED 00:04:42.098 malloc 64 00:04:42.098 buf 0x2000004fff40 len 64 PASSED 00:04:42.098 malloc 4194304 00:04:42.098 register 0x200000800000 6291456 00:04:42.098 buf 0x200000a00000 len 4194304 PASSED 00:04:42.098 free 0x200000500000 3145728 00:04:42.098 free 0x2000004fff40 64 00:04:42.098 unregister 0x200000400000 4194304 PASSED 00:04:42.098 free 0x200000a00000 4194304 00:04:42.098 unregister 0x200000800000 6291456 PASSED 00:04:42.098 malloc 8388608 00:04:42.098 register 0x200000400000 10485760 00:04:42.099 buf 0x200000600000 len 8388608 PASSED 00:04:42.099 free 0x200000600000 8388608 00:04:42.099 unregister 0x200000400000 10485760 PASSED 00:04:42.099 passed 00:04:42.099 00:04:42.099 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.099 suites 1 1 n/a 0 0 00:04:42.099 tests 1 1 1 0 0 00:04:42.099 asserts 15 15 15 0 n/a 00:04:42.099 00:04:42.099 Elapsed time = 0.009 seconds 00:04:42.099 00:04:42.099 real 0m0.072s 00:04:42.099 user 0m0.015s 00:04:42.099 sys 0m0.057s 00:04:42.099 18:59:22 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.099 18:59:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:42.099 ************************************ 00:04:42.099 END TEST env_mem_callbacks 00:04:42.099 ************************************ 00:04:42.099 18:59:22 env -- common/autotest_common.sh@1142 -- # return 0 00:04:42.099 00:04:42.099 real 0m3.209s 00:04:42.099 user 0m1.409s 00:04:42.099 sys 0m1.138s 00:04:42.099 18:59:22 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.099 18:59:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.099 ************************************ 00:04:42.099 END TEST env 00:04:42.099 ************************************ 00:04:42.099 18:59:22 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.099 18:59:22 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.099 18:59:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.099 18:59:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.099 18:59:22 -- common/autotest_common.sh@10 -- # set +x 00:04:42.099 ************************************ 00:04:42.099 START TEST rpc 00:04:42.099 ************************************ 00:04:42.099 18:59:22 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.099 * Looking for test storage... 00:04:42.099 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:42.358 18:59:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=651780 00:04:42.358 18:59:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:42.358 18:59:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.358 18:59:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 651780 00:04:42.358 18:59:22 rpc -- common/autotest_common.sh@829 -- # '[' -z 651780 ']' 00:04:42.358 18:59:22 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.358 18:59:22 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.358 18:59:22 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:42.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.358 18:59:22 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.358 18:59:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.358 [2024-07-15 18:59:22.554504] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:42.358 [2024-07-15 18:59:22.554605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651780 ] 00:04:42.358 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.358 [2024-07-15 18:59:22.638373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.358 [2024-07-15 18:59:22.719513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.359 [2024-07-15 18:59:22.719552] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 651780' to capture a snapshot of events at runtime. 00:04:42.359 [2024-07-15 18:59:22.719562] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:42.359 [2024-07-15 18:59:22.719570] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:42.359 [2024-07-15 18:59:22.719578] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid651780 for offline analysis/debug. 00:04:42.359 [2024-07-15 18:59:22.719601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.297 18:59:23 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.297 18:59:23 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:43.297 18:59:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:43.297 18:59:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:43.297 18:59:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.297 18:59:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.297 18:59:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.297 18:59:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.297 18:59:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.297 ************************************ 00:04:43.297 START TEST rpc_integrity 00:04:43.297 ************************************ 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:43.297 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.297 18:59:23 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.297 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.297 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.297 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.297 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.297 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.297 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.297 { 00:04:43.297 "name": "Malloc0", 00:04:43.297 "aliases": [ 00:04:43.297 "a6d3823e-c15a-446f-9e9c-323d5c84979a" 00:04:43.297 ], 00:04:43.297 "product_name": "Malloc disk", 00:04:43.297 "block_size": 512, 00:04:43.297 "num_blocks": 16384, 00:04:43.297 "uuid": "a6d3823e-c15a-446f-9e9c-323d5c84979a", 00:04:43.297 "assigned_rate_limits": { 00:04:43.297 "rw_ios_per_sec": 0, 00:04:43.297 "rw_mbytes_per_sec": 0, 00:04:43.297 "r_mbytes_per_sec": 0, 00:04:43.297 "w_mbytes_per_sec": 0 00:04:43.297 }, 00:04:43.297 "claimed": false, 00:04:43.297 "zoned": false, 00:04:43.297 "supported_io_types": { 00:04:43.297 "read": true, 00:04:43.297 "write": true, 00:04:43.297 "unmap": true, 00:04:43.297 "flush": true, 00:04:43.297 "reset": true, 00:04:43.297 "nvme_admin": false, 00:04:43.297 "nvme_io": false, 00:04:43.297 "nvme_io_md": false, 00:04:43.297 "write_zeroes": true, 00:04:43.297 "zcopy": true, 00:04:43.297 "get_zone_info": false, 00:04:43.297 "zone_management": false, 00:04:43.297 "zone_append": false, 00:04:43.297 "compare": false, 00:04:43.297 "compare_and_write": false, 00:04:43.297 "abort": true, 00:04:43.297 "seek_hole": false, 00:04:43.297 "seek_data": false, 00:04:43.297 "copy": true, 00:04:43.297 "nvme_iov_md": false 00:04:43.297 }, 00:04:43.297 "memory_domains": [ 00:04:43.297 { 00:04:43.297 "dma_device_id": "system", 00:04:43.297 "dma_device_type": 1 00:04:43.297 }, 00:04:43.297 { 00:04:43.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.297 "dma_device_type": 2 00:04:43.297 } 00:04:43.297 ], 00:04:43.297 "driver_specific": {} 00:04:43.297 } 00:04:43.297 ]' 00:04:43.297 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.297 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.297 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.297 [2024-07-15 18:59:23.545405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.297 [2024-07-15 18:59:23.545437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.297 [2024-07-15 18:59:23.545452] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5a400e0 00:04:43.297 [2024-07-15 18:59:23.545462] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:04:43.297 [2024-07-15 18:59:23.546415] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.297 [2024-07-15 18:59:23.546437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.297 Passthru0 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.297 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.297 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.297 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.297 { 00:04:43.297 "name": "Malloc0", 00:04:43.297 "aliases": [ 00:04:43.297 "a6d3823e-c15a-446f-9e9c-323d5c84979a" 00:04:43.297 ], 00:04:43.297 "product_name": "Malloc disk", 00:04:43.297 "block_size": 512, 00:04:43.297 "num_blocks": 16384, 00:04:43.297 "uuid": "a6d3823e-c15a-446f-9e9c-323d5c84979a", 00:04:43.297 "assigned_rate_limits": { 00:04:43.297 "rw_ios_per_sec": 0, 00:04:43.297 "rw_mbytes_per_sec": 0, 00:04:43.297 "r_mbytes_per_sec": 0, 00:04:43.297 "w_mbytes_per_sec": 0 00:04:43.297 }, 00:04:43.297 "claimed": true, 00:04:43.297 "claim_type": "exclusive_write", 00:04:43.297 "zoned": false, 00:04:43.297 "supported_io_types": { 00:04:43.297 "read": true, 00:04:43.297 "write": true, 00:04:43.297 "unmap": true, 00:04:43.297 "flush": true, 00:04:43.297 "reset": true, 00:04:43.297 "nvme_admin": false, 00:04:43.297 "nvme_io": false, 00:04:43.297 "nvme_io_md": false, 00:04:43.297 "write_zeroes": true, 00:04:43.297 "zcopy": true, 00:04:43.297 "get_zone_info": false, 00:04:43.297 "zone_management": false, 00:04:43.297 "zone_append": false, 00:04:43.297 "compare": false, 00:04:43.297 "compare_and_write": false, 00:04:43.297 "abort": true, 00:04:43.297 "seek_hole": false, 00:04:43.297 "seek_data": false, 00:04:43.297 "copy": true, 00:04:43.298 "nvme_iov_md": false 00:04:43.298 }, 00:04:43.298 "memory_domains": [ 00:04:43.298 { 00:04:43.298 "dma_device_id": "system", 00:04:43.298 "dma_device_type": 1 00:04:43.298 }, 00:04:43.298 { 00:04:43.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.298 "dma_device_type": 2 00:04:43.298 } 00:04:43.298 ], 00:04:43.298 "driver_specific": {} 00:04:43.298 }, 00:04:43.298 { 00:04:43.298 "name": "Passthru0", 00:04:43.298 "aliases": [ 00:04:43.298 "61a0548e-fe30-5641-930f-689109270de0" 00:04:43.298 ], 00:04:43.298 "product_name": "passthru", 00:04:43.298 "block_size": 512, 00:04:43.298 "num_blocks": 16384, 00:04:43.298 "uuid": "61a0548e-fe30-5641-930f-689109270de0", 00:04:43.298 "assigned_rate_limits": { 00:04:43.298 "rw_ios_per_sec": 0, 00:04:43.298 "rw_mbytes_per_sec": 0, 00:04:43.298 "r_mbytes_per_sec": 0, 00:04:43.298 "w_mbytes_per_sec": 0 00:04:43.298 }, 00:04:43.298 "claimed": false, 00:04:43.298 "zoned": false, 00:04:43.298 "supported_io_types": { 00:04:43.298 "read": true, 00:04:43.298 "write": true, 00:04:43.298 "unmap": true, 00:04:43.298 "flush": true, 00:04:43.298 "reset": true, 00:04:43.298 "nvme_admin": false, 00:04:43.298 "nvme_io": false, 00:04:43.298 "nvme_io_md": false, 00:04:43.298 "write_zeroes": true, 00:04:43.298 "zcopy": true, 00:04:43.298 "get_zone_info": false, 00:04:43.298 "zone_management": false, 00:04:43.298 "zone_append": false, 00:04:43.298 "compare": false, 00:04:43.298 "compare_and_write": false, 00:04:43.298 "abort": true, 00:04:43.298 
"seek_hole": false, 00:04:43.298 "seek_data": false, 00:04:43.298 "copy": true, 00:04:43.298 "nvme_iov_md": false 00:04:43.298 }, 00:04:43.298 "memory_domains": [ 00:04:43.298 { 00:04:43.298 "dma_device_id": "system", 00:04:43.298 "dma_device_type": 1 00:04:43.298 }, 00:04:43.298 { 00:04:43.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.298 "dma_device_type": 2 00:04:43.298 } 00:04:43.298 ], 00:04:43.298 "driver_specific": { 00:04:43.298 "passthru": { 00:04:43.298 "name": "Passthru0", 00:04:43.298 "base_bdev_name": "Malloc0" 00:04:43.298 } 00:04:43.298 } 00:04:43.298 } 00:04:43.298 ]' 00:04:43.298 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.298 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.298 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.298 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.298 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.298 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.298 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.298 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.298 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.298 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.298 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.298 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.298 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.298 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.298 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.298 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.298 18:59:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.298 00:04:43.298 real 0m0.297s 00:04:43.298 user 0m0.182s 00:04:43.298 sys 0m0.052s 00:04:43.298 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.298 18:59:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.298 ************************************ 00:04:43.298 END TEST rpc_integrity 00:04:43.298 ************************************ 00:04:43.557 18:59:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:43.557 18:59:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.557 18:59:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.557 18:59:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.557 18:59:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.557 ************************************ 00:04:43.557 START TEST rpc_plugins 00:04:43.557 ************************************ 00:04:43.557 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:43.557 18:59:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.557 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.557 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.557 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.557 18:59:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.557 18:59:23 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:43.557 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.557 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.557 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.557 18:59:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.557 { 00:04:43.557 "name": "Malloc1", 00:04:43.557 "aliases": [ 00:04:43.557 "dfd61cdb-9f0d-4dff-9ecd-8764d2954eea" 00:04:43.557 ], 00:04:43.557 "product_name": "Malloc disk", 00:04:43.557 "block_size": 4096, 00:04:43.557 "num_blocks": 256, 00:04:43.557 "uuid": "dfd61cdb-9f0d-4dff-9ecd-8764d2954eea", 00:04:43.557 "assigned_rate_limits": { 00:04:43.557 "rw_ios_per_sec": 0, 00:04:43.557 "rw_mbytes_per_sec": 0, 00:04:43.557 "r_mbytes_per_sec": 0, 00:04:43.557 "w_mbytes_per_sec": 0 00:04:43.557 }, 00:04:43.557 "claimed": false, 00:04:43.557 "zoned": false, 00:04:43.557 "supported_io_types": { 00:04:43.557 "read": true, 00:04:43.557 "write": true, 00:04:43.557 "unmap": true, 00:04:43.557 "flush": true, 00:04:43.557 "reset": true, 00:04:43.557 "nvme_admin": false, 00:04:43.557 "nvme_io": false, 00:04:43.557 "nvme_io_md": false, 00:04:43.557 "write_zeroes": true, 00:04:43.557 "zcopy": true, 00:04:43.557 "get_zone_info": false, 00:04:43.557 "zone_management": false, 00:04:43.557 "zone_append": false, 00:04:43.557 "compare": false, 00:04:43.557 "compare_and_write": false, 00:04:43.557 "abort": true, 00:04:43.557 "seek_hole": false, 00:04:43.557 "seek_data": false, 00:04:43.557 "copy": true, 00:04:43.557 "nvme_iov_md": false 00:04:43.557 }, 00:04:43.557 "memory_domains": [ 00:04:43.557 { 00:04:43.557 "dma_device_id": "system", 00:04:43.557 "dma_device_type": 1 00:04:43.558 }, 00:04:43.558 { 00:04:43.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.558 "dma_device_type": 2 00:04:43.558 } 00:04:43.558 ], 00:04:43.558 "driver_specific": {} 00:04:43.558 } 00:04:43.558 ]' 00:04:43.558 18:59:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:43.558 18:59:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:43.558 18:59:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:43.558 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.558 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.558 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.558 18:59:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:43.558 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.558 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.558 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.558 18:59:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:43.558 18:59:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:43.558 18:59:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:43.558 00:04:43.558 real 0m0.155s 00:04:43.558 user 0m0.087s 00:04:43.558 sys 0m0.031s 00:04:43.558 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.558 18:59:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.558 ************************************ 00:04:43.558 END TEST rpc_plugins 00:04:43.558 ************************************ 00:04:43.558 18:59:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:43.558 18:59:23 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:43.558 18:59:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.558 18:59:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.558 18:59:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.817 ************************************ 00:04:43.817 START TEST rpc_trace_cmd_test 00:04:43.817 ************************************ 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:43.817 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid651780", 00:04:43.817 "tpoint_group_mask": "0x8", 00:04:43.817 "iscsi_conn": { 00:04:43.817 "mask": "0x2", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 }, 00:04:43.817 "scsi": { 00:04:43.817 "mask": "0x4", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 }, 00:04:43.817 "bdev": { 00:04:43.817 "mask": "0x8", 00:04:43.817 "tpoint_mask": "0xffffffffffffffff" 00:04:43.817 }, 00:04:43.817 "nvmf_rdma": { 00:04:43.817 "mask": "0x10", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 }, 00:04:43.817 "nvmf_tcp": { 00:04:43.817 "mask": "0x20", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 }, 00:04:43.817 "ftl": { 00:04:43.817 "mask": "0x40", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 }, 00:04:43.817 "blobfs": { 00:04:43.817 "mask": "0x80", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 }, 00:04:43.817 "dsa": { 00:04:43.817 "mask": "0x200", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 }, 00:04:43.817 "thread": { 00:04:43.817 "mask": "0x400", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 }, 00:04:43.817 "nvme_pcie": { 00:04:43.817 "mask": "0x800", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 }, 00:04:43.817 "iaa": { 00:04:43.817 "mask": "0x1000", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 }, 00:04:43.817 "nvme_tcp": { 00:04:43.817 "mask": "0x2000", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 }, 00:04:43.817 "bdev_nvme": { 00:04:43.817 "mask": "0x4000", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 }, 00:04:43.817 "sock": { 00:04:43.817 "mask": "0x8000", 00:04:43.817 "tpoint_mask": "0x0" 00:04:43.817 } 00:04:43.817 }' 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:43.817 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:43.818 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:43.818 18:59:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:04:43.818 00:04:43.818 real 0m0.217s 00:04:43.818 user 0m0.180s 00:04:43.818 sys 0m0.031s 00:04:43.818 18:59:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.818 18:59:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.818 ************************************ 00:04:43.818 END TEST rpc_trace_cmd_test 00:04:43.818 ************************************ 00:04:44.078 18:59:24 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:44.078 18:59:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:44.078 18:59:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.078 18:59:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.078 18:59:24 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.078 18:59:24 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.078 18:59:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.078 ************************************ 00:04:44.078 START TEST rpc_daemon_integrity 00:04:44.078 ************************************ 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.078 { 00:04:44.078 "name": "Malloc2", 00:04:44.078 "aliases": [ 00:04:44.078 "4dcc9437-fa78-47d1-8ed1-5ec6fac01699" 00:04:44.078 ], 00:04:44.078 "product_name": "Malloc disk", 00:04:44.078 "block_size": 512, 00:04:44.078 "num_blocks": 16384, 00:04:44.078 "uuid": "4dcc9437-fa78-47d1-8ed1-5ec6fac01699", 00:04:44.078 "assigned_rate_limits": { 00:04:44.078 "rw_ios_per_sec": 0, 00:04:44.078 "rw_mbytes_per_sec": 0, 00:04:44.078 "r_mbytes_per_sec": 0, 00:04:44.078 "w_mbytes_per_sec": 0 00:04:44.078 }, 00:04:44.078 "claimed": false, 00:04:44.078 "zoned": false, 00:04:44.078 "supported_io_types": { 00:04:44.078 "read": true, 00:04:44.078 "write": true, 00:04:44.078 "unmap": true, 00:04:44.078 "flush": true, 00:04:44.078 "reset": true, 00:04:44.078 "nvme_admin": false, 
00:04:44.078 "nvme_io": false, 00:04:44.078 "nvme_io_md": false, 00:04:44.078 "write_zeroes": true, 00:04:44.078 "zcopy": true, 00:04:44.078 "get_zone_info": false, 00:04:44.078 "zone_management": false, 00:04:44.078 "zone_append": false, 00:04:44.078 "compare": false, 00:04:44.078 "compare_and_write": false, 00:04:44.078 "abort": true, 00:04:44.078 "seek_hole": false, 00:04:44.078 "seek_data": false, 00:04:44.078 "copy": true, 00:04:44.078 "nvme_iov_md": false 00:04:44.078 }, 00:04:44.078 "memory_domains": [ 00:04:44.078 { 00:04:44.078 "dma_device_id": "system", 00:04:44.078 "dma_device_type": 1 00:04:44.078 }, 00:04:44.078 { 00:04:44.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.078 "dma_device_type": 2 00:04:44.078 } 00:04:44.078 ], 00:04:44.078 "driver_specific": {} 00:04:44.078 } 00:04:44.078 ]' 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.078 [2024-07-15 18:59:24.479780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:44.078 [2024-07-15 18:59:24.479812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.078 [2024-07-15 18:59:24.479842] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5a40310 00:04:44.078 [2024-07-15 18:59:24.479851] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.078 [2024-07-15 18:59:24.480576] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.078 [2024-07-15 18:59:24.480597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.078 Passthru0 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.078 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.338 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.338 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.338 { 00:04:44.338 "name": "Malloc2", 00:04:44.338 "aliases": [ 00:04:44.338 "4dcc9437-fa78-47d1-8ed1-5ec6fac01699" 00:04:44.338 ], 00:04:44.338 "product_name": "Malloc disk", 00:04:44.338 "block_size": 512, 00:04:44.338 "num_blocks": 16384, 00:04:44.338 "uuid": "4dcc9437-fa78-47d1-8ed1-5ec6fac01699", 00:04:44.338 "assigned_rate_limits": { 00:04:44.338 "rw_ios_per_sec": 0, 00:04:44.338 "rw_mbytes_per_sec": 0, 00:04:44.338 "r_mbytes_per_sec": 0, 00:04:44.338 "w_mbytes_per_sec": 0 00:04:44.338 }, 00:04:44.338 "claimed": true, 00:04:44.338 "claim_type": "exclusive_write", 00:04:44.338 "zoned": false, 00:04:44.338 "supported_io_types": { 00:04:44.338 "read": true, 00:04:44.338 "write": true, 00:04:44.338 "unmap": true, 00:04:44.338 "flush": true, 00:04:44.338 "reset": true, 00:04:44.338 "nvme_admin": false, 00:04:44.338 "nvme_io": false, 00:04:44.338 "nvme_io_md": false, 00:04:44.338 "write_zeroes": true, 00:04:44.338 "zcopy": true, 
00:04:44.338 "get_zone_info": false, 00:04:44.338 "zone_management": false, 00:04:44.338 "zone_append": false, 00:04:44.338 "compare": false, 00:04:44.338 "compare_and_write": false, 00:04:44.338 "abort": true, 00:04:44.338 "seek_hole": false, 00:04:44.338 "seek_data": false, 00:04:44.338 "copy": true, 00:04:44.339 "nvme_iov_md": false 00:04:44.339 }, 00:04:44.339 "memory_domains": [ 00:04:44.339 { 00:04:44.339 "dma_device_id": "system", 00:04:44.339 "dma_device_type": 1 00:04:44.339 }, 00:04:44.339 { 00:04:44.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.339 "dma_device_type": 2 00:04:44.339 } 00:04:44.339 ], 00:04:44.339 "driver_specific": {} 00:04:44.339 }, 00:04:44.339 { 00:04:44.339 "name": "Passthru0", 00:04:44.339 "aliases": [ 00:04:44.339 "bd001752-eb18-5cf1-b354-3224f962b8ad" 00:04:44.339 ], 00:04:44.339 "product_name": "passthru", 00:04:44.339 "block_size": 512, 00:04:44.339 "num_blocks": 16384, 00:04:44.339 "uuid": "bd001752-eb18-5cf1-b354-3224f962b8ad", 00:04:44.339 "assigned_rate_limits": { 00:04:44.339 "rw_ios_per_sec": 0, 00:04:44.339 "rw_mbytes_per_sec": 0, 00:04:44.339 "r_mbytes_per_sec": 0, 00:04:44.339 "w_mbytes_per_sec": 0 00:04:44.339 }, 00:04:44.339 "claimed": false, 00:04:44.339 "zoned": false, 00:04:44.339 "supported_io_types": { 00:04:44.339 "read": true, 00:04:44.339 "write": true, 00:04:44.339 "unmap": true, 00:04:44.339 "flush": true, 00:04:44.339 "reset": true, 00:04:44.339 "nvme_admin": false, 00:04:44.339 "nvme_io": false, 00:04:44.339 "nvme_io_md": false, 00:04:44.339 "write_zeroes": true, 00:04:44.339 "zcopy": true, 00:04:44.339 "get_zone_info": false, 00:04:44.339 "zone_management": false, 00:04:44.339 "zone_append": false, 00:04:44.339 "compare": false, 00:04:44.339 "compare_and_write": false, 00:04:44.339 "abort": true, 00:04:44.339 "seek_hole": false, 00:04:44.339 "seek_data": false, 00:04:44.339 "copy": true, 00:04:44.339 "nvme_iov_md": false 00:04:44.339 }, 00:04:44.339 "memory_domains": [ 00:04:44.339 { 00:04:44.339 "dma_device_id": "system", 00:04:44.339 "dma_device_type": 1 00:04:44.339 }, 00:04:44.339 { 00:04:44.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.339 "dma_device_type": 2 00:04:44.339 } 00:04:44.339 ], 00:04:44.339 "driver_specific": { 00:04:44.339 "passthru": { 00:04:44.339 "name": "Passthru0", 00:04:44.339 "base_bdev_name": "Malloc2" 00:04:44.339 } 00:04:44.339 } 00:04:44.339 } 00:04:44.339 ]' 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.339 00:04:44.339 real 0m0.305s 00:04:44.339 user 0m0.181s 00:04:44.339 sys 0m0.060s 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.339 18:59:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.339 ************************************ 00:04:44.339 END TEST rpc_daemon_integrity 00:04:44.339 ************************************ 00:04:44.339 18:59:24 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:44.339 18:59:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.339 18:59:24 rpc -- rpc/rpc.sh@84 -- # killprocess 651780 00:04:44.339 18:59:24 rpc -- common/autotest_common.sh@948 -- # '[' -z 651780 ']' 00:04:44.339 18:59:24 rpc -- common/autotest_common.sh@952 -- # kill -0 651780 00:04:44.339 18:59:24 rpc -- common/autotest_common.sh@953 -- # uname 00:04:44.339 18:59:24 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.339 18:59:24 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 651780 00:04:44.339 18:59:24 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.339 18:59:24 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.339 18:59:24 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 651780' 00:04:44.339 killing process with pid 651780 00:04:44.339 18:59:24 rpc -- common/autotest_common.sh@967 -- # kill 651780 00:04:44.339 18:59:24 rpc -- common/autotest_common.sh@972 -- # wait 651780 00:04:44.908 00:04:44.908 real 0m2.657s 00:04:44.908 user 0m3.320s 00:04:44.908 sys 0m0.875s 00:04:44.908 18:59:25 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.908 18:59:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.908 ************************************ 00:04:44.908 END TEST rpc 00:04:44.908 ************************************ 00:04:44.908 18:59:25 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.908 18:59:25 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:44.908 18:59:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.908 18:59:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.908 18:59:25 -- common/autotest_common.sh@10 -- # set +x 00:04:44.908 ************************************ 00:04:44.908 START TEST skip_rpc 00:04:44.908 ************************************ 00:04:44.908 18:59:25 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:44.908 * Looking for test storage... 
00:04:44.908 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:44.908 18:59:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:44.908 18:59:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:44.908 18:59:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:44.908 18:59:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.908 18:59:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.908 18:59:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.908 ************************************ 00:04:44.908 START TEST skip_rpc 00:04:44.908 ************************************ 00:04:44.908 18:59:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:44.908 18:59:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=652319 00:04:44.908 18:59:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.908 18:59:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:44.908 18:59:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:44.908 [2024-07-15 18:59:25.333147] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:44.908 [2024-07-15 18:59:25.333213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid652319 ] 00:04:45.167 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.167 [2024-07-15 18:59:25.418814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.167 [2024-07-15 18:59:25.503917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.441 18:59:30 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 652319 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 652319 ']' 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 652319 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 652319 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 652319' 00:04:50.441 killing process with pid 652319 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 652319 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 652319 00:04:50.441 00:04:50.441 real 0m5.405s 00:04:50.441 user 0m5.139s 00:04:50.441 sys 0m0.297s 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.441 18:59:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.441 ************************************ 00:04:50.441 END TEST skip_rpc 00:04:50.441 ************************************ 00:04:50.441 18:59:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.441 18:59:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:50.441 18:59:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.441 18:59:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.441 18:59:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.441 ************************************ 00:04:50.441 START TEST skip_rpc_with_json 00:04:50.441 ************************************ 00:04:50.441 18:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:50.441 18:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:50.441 18:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=653057 00:04:50.441 18:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.441 18:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.441 18:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 653057 00:04:50.441 18:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 653057 ']' 00:04:50.441 18:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.441 18:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.441 18:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
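For reference, the skip_rpc_with_json run that starts here drives the same RPCs by hand once the target is listening. A minimal sketch, assuming a spdk_tgt is up on the default /var/tmp/spdk.sock and using the stock scripts/rpc.py client (paths as in this workspace):

  # querying transports before any exist fails with "No such device", as seen below
  ./scripts/rpc.py nvmf_get_transports --trtype tcp || true
  # create the TCP transport, then snapshot the running configuration
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > test/rpc/config.json

The saved config.json is what the test later feeds back via --json to prove the target can be brought up again without the RPC server.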
00:04:50.441 18:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.441 18:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.441 [2024-07-15 18:59:30.825822] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:50.442 [2024-07-15 18:59:30.825905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid653057 ] 00:04:50.442 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.701 [2024-07-15 18:59:30.910447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.701 [2024-07-15 18:59:30.999982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.269 [2024-07-15 18:59:31.657465] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:51.269 request: 00:04:51.269 { 00:04:51.269 "trtype": "tcp", 00:04:51.269 "method": "nvmf_get_transports", 00:04:51.269 "req_id": 1 00:04:51.269 } 00:04:51.269 Got JSON-RPC error response 00:04:51.269 response: 00:04:51.269 { 00:04:51.269 "code": -19, 00:04:51.269 "message": "No such device" 00:04:51.269 } 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.269 [2024-07-15 18:59:31.669556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.269 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.530 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.530 18:59:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:51.530 { 00:04:51.530 "subsystems": [ 00:04:51.530 { 00:04:51.530 "subsystem": "scheduler", 00:04:51.530 "config": [ 00:04:51.530 { 00:04:51.530 "method": "framework_set_scheduler", 00:04:51.530 "params": { 00:04:51.530 "name": "static" 00:04:51.530 } 00:04:51.530 } 00:04:51.530 ] 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "subsystem": "vmd", 00:04:51.530 "config": [] 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "subsystem": "sock", 00:04:51.530 "config": [ 00:04:51.530 { 00:04:51.530 "method": "sock_set_default_impl", 00:04:51.530 
"params": { 00:04:51.530 "impl_name": "posix" 00:04:51.530 } 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "method": "sock_impl_set_options", 00:04:51.530 "params": { 00:04:51.530 "impl_name": "ssl", 00:04:51.530 "recv_buf_size": 4096, 00:04:51.530 "send_buf_size": 4096, 00:04:51.530 "enable_recv_pipe": true, 00:04:51.530 "enable_quickack": false, 00:04:51.530 "enable_placement_id": 0, 00:04:51.530 "enable_zerocopy_send_server": true, 00:04:51.530 "enable_zerocopy_send_client": false, 00:04:51.530 "zerocopy_threshold": 0, 00:04:51.530 "tls_version": 0, 00:04:51.530 "enable_ktls": false 00:04:51.530 } 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "method": "sock_impl_set_options", 00:04:51.530 "params": { 00:04:51.530 "impl_name": "posix", 00:04:51.530 "recv_buf_size": 2097152, 00:04:51.530 "send_buf_size": 2097152, 00:04:51.530 "enable_recv_pipe": true, 00:04:51.530 "enable_quickack": false, 00:04:51.530 "enable_placement_id": 0, 00:04:51.530 "enable_zerocopy_send_server": true, 00:04:51.530 "enable_zerocopy_send_client": false, 00:04:51.530 "zerocopy_threshold": 0, 00:04:51.530 "tls_version": 0, 00:04:51.530 "enable_ktls": false 00:04:51.530 } 00:04:51.530 } 00:04:51.530 ] 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "subsystem": "iobuf", 00:04:51.530 "config": [ 00:04:51.530 { 00:04:51.530 "method": "iobuf_set_options", 00:04:51.530 "params": { 00:04:51.530 "small_pool_count": 8192, 00:04:51.530 "large_pool_count": 1024, 00:04:51.530 "small_bufsize": 8192, 00:04:51.530 "large_bufsize": 135168 00:04:51.530 } 00:04:51.530 } 00:04:51.530 ] 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "subsystem": "keyring", 00:04:51.530 "config": [] 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "subsystem": "vfio_user_target", 00:04:51.530 "config": null 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "subsystem": "accel", 00:04:51.530 "config": [ 00:04:51.530 { 00:04:51.530 "method": "accel_set_options", 00:04:51.530 "params": { 00:04:51.530 "small_cache_size": 128, 00:04:51.530 "large_cache_size": 16, 00:04:51.530 "task_count": 2048, 00:04:51.530 "sequence_count": 2048, 00:04:51.530 "buf_count": 2048 00:04:51.530 } 00:04:51.530 } 00:04:51.530 ] 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "subsystem": "bdev", 00:04:51.530 "config": [ 00:04:51.530 { 00:04:51.530 "method": "bdev_set_options", 00:04:51.530 "params": { 00:04:51.530 "bdev_io_pool_size": 65535, 00:04:51.530 "bdev_io_cache_size": 256, 00:04:51.530 "bdev_auto_examine": true, 00:04:51.530 "iobuf_small_cache_size": 128, 00:04:51.530 "iobuf_large_cache_size": 16 00:04:51.530 } 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "method": "bdev_raid_set_options", 00:04:51.530 "params": { 00:04:51.530 "process_window_size_kb": 1024 00:04:51.530 } 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "method": "bdev_nvme_set_options", 00:04:51.530 "params": { 00:04:51.530 "action_on_timeout": "none", 00:04:51.530 "timeout_us": 0, 00:04:51.530 "timeout_admin_us": 0, 00:04:51.530 "keep_alive_timeout_ms": 10000, 00:04:51.530 "arbitration_burst": 0, 00:04:51.530 "low_priority_weight": 0, 00:04:51.530 "medium_priority_weight": 0, 00:04:51.530 "high_priority_weight": 0, 00:04:51.530 "nvme_adminq_poll_period_us": 10000, 00:04:51.530 "nvme_ioq_poll_period_us": 0, 00:04:51.530 "io_queue_requests": 0, 00:04:51.530 "delay_cmd_submit": true, 00:04:51.530 "transport_retry_count": 4, 00:04:51.530 "bdev_retry_count": 3, 00:04:51.530 "transport_ack_timeout": 0, 00:04:51.530 "ctrlr_loss_timeout_sec": 0, 00:04:51.530 "reconnect_delay_sec": 0, 00:04:51.530 "fast_io_fail_timeout_sec": 0, 00:04:51.530 
"disable_auto_failback": false, 00:04:51.530 "generate_uuids": false, 00:04:51.530 "transport_tos": 0, 00:04:51.530 "nvme_error_stat": false, 00:04:51.530 "rdma_srq_size": 0, 00:04:51.530 "io_path_stat": false, 00:04:51.530 "allow_accel_sequence": false, 00:04:51.530 "rdma_max_cq_size": 0, 00:04:51.530 "rdma_cm_event_timeout_ms": 0, 00:04:51.530 "dhchap_digests": [ 00:04:51.530 "sha256", 00:04:51.530 "sha384", 00:04:51.530 "sha512" 00:04:51.530 ], 00:04:51.530 "dhchap_dhgroups": [ 00:04:51.530 "null", 00:04:51.530 "ffdhe2048", 00:04:51.530 "ffdhe3072", 00:04:51.530 "ffdhe4096", 00:04:51.530 "ffdhe6144", 00:04:51.530 "ffdhe8192" 00:04:51.530 ] 00:04:51.530 } 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "method": "bdev_nvme_set_hotplug", 00:04:51.530 "params": { 00:04:51.530 "period_us": 100000, 00:04:51.530 "enable": false 00:04:51.530 } 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "method": "bdev_iscsi_set_options", 00:04:51.530 "params": { 00:04:51.530 "timeout_sec": 30 00:04:51.530 } 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "method": "bdev_wait_for_examine" 00:04:51.530 } 00:04:51.530 ] 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "subsystem": "nvmf", 00:04:51.530 "config": [ 00:04:51.530 { 00:04:51.530 "method": "nvmf_set_config", 00:04:51.530 "params": { 00:04:51.530 "discovery_filter": "match_any", 00:04:51.530 "admin_cmd_passthru": { 00:04:51.530 "identify_ctrlr": false 00:04:51.530 } 00:04:51.530 } 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "method": "nvmf_set_max_subsystems", 00:04:51.530 "params": { 00:04:51.530 "max_subsystems": 1024 00:04:51.530 } 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "method": "nvmf_set_crdt", 00:04:51.530 "params": { 00:04:51.530 "crdt1": 0, 00:04:51.530 "crdt2": 0, 00:04:51.530 "crdt3": 0 00:04:51.530 } 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "method": "nvmf_create_transport", 00:04:51.530 "params": { 00:04:51.530 "trtype": "TCP", 00:04:51.530 "max_queue_depth": 128, 00:04:51.530 "max_io_qpairs_per_ctrlr": 127, 00:04:51.530 "in_capsule_data_size": 4096, 00:04:51.530 "max_io_size": 131072, 00:04:51.530 "io_unit_size": 131072, 00:04:51.530 "max_aq_depth": 128, 00:04:51.530 "num_shared_buffers": 511, 00:04:51.530 "buf_cache_size": 4294967295, 00:04:51.530 "dif_insert_or_strip": false, 00:04:51.530 "zcopy": false, 00:04:51.530 "c2h_success": true, 00:04:51.530 "sock_priority": 0, 00:04:51.530 "abort_timeout_sec": 1, 00:04:51.530 "ack_timeout": 0, 00:04:51.530 "data_wr_pool_size": 0 00:04:51.530 } 00:04:51.530 } 00:04:51.530 ] 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "subsystem": "nbd", 00:04:51.530 "config": [] 00:04:51.530 }, 00:04:51.530 { 00:04:51.530 "subsystem": "ublk", 00:04:51.531 "config": [] 00:04:51.531 }, 00:04:51.531 { 00:04:51.531 "subsystem": "vhost_blk", 00:04:51.531 "config": [] 00:04:51.531 }, 00:04:51.531 { 00:04:51.531 "subsystem": "scsi", 00:04:51.531 "config": null 00:04:51.531 }, 00:04:51.531 { 00:04:51.531 "subsystem": "iscsi", 00:04:51.531 "config": [ 00:04:51.531 { 00:04:51.531 "method": "iscsi_set_options", 00:04:51.531 "params": { 00:04:51.531 "node_base": "iqn.2016-06.io.spdk", 00:04:51.531 "max_sessions": 128, 00:04:51.531 "max_connections_per_session": 2, 00:04:51.531 "max_queue_depth": 64, 00:04:51.531 "default_time2wait": 2, 00:04:51.531 "default_time2retain": 20, 00:04:51.531 "first_burst_length": 8192, 00:04:51.531 "immediate_data": true, 00:04:51.531 "allow_duplicated_isid": false, 00:04:51.531 "error_recovery_level": 0, 00:04:51.531 "nop_timeout": 60, 00:04:51.531 "nop_in_interval": 30, 00:04:51.531 
"disable_chap": false, 00:04:51.531 "require_chap": false, 00:04:51.531 "mutual_chap": false, 00:04:51.531 "chap_group": 0, 00:04:51.531 "max_large_datain_per_connection": 64, 00:04:51.531 "max_r2t_per_connection": 4, 00:04:51.531 "pdu_pool_size": 36864, 00:04:51.531 "immediate_data_pool_size": 16384, 00:04:51.531 "data_out_pool_size": 2048 00:04:51.531 } 00:04:51.531 } 00:04:51.531 ] 00:04:51.531 }, 00:04:51.531 { 00:04:51.531 "subsystem": "vhost_scsi", 00:04:51.531 "config": [] 00:04:51.531 } 00:04:51.531 ] 00:04:51.531 } 00:04:51.531 18:59:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:51.531 18:59:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 653057 00:04:51.531 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 653057 ']' 00:04:51.531 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 653057 00:04:51.531 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:51.531 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.531 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 653057 00:04:51.531 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.531 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.531 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 653057' 00:04:51.531 killing process with pid 653057 00:04:51.531 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 653057 00:04:51.531 18:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 653057 00:04:52.100 18:59:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=653275 00:04:52.100 18:59:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:52.100 18:59:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 653275 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 653275 ']' 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 653275 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 653275 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 653275' 00:04:57.389 killing process with pid 653275 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 653275 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 653275 00:04:57.389 18:59:37 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:57.389 00:04:57.389 real 0m6.838s 00:04:57.389 user 0m6.560s 00:04:57.389 sys 0m0.727s 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.389 ************************************ 00:04:57.389 END TEST skip_rpc_with_json 00:04:57.389 ************************************ 00:04:57.389 18:59:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.389 18:59:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:57.389 18:59:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.389 18:59:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.389 18:59:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.389 ************************************ 00:04:57.389 START TEST skip_rpc_with_delay 00:04:57.389 ************************************ 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.389 [2024-07-15 18:59:37.752177] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
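The skip_rpc_with_delay case above passes precisely because this invocation is rejected. A minimal sketch of the assertion pattern, assuming spdk_tgt is on PATH:

  # --no-rpc-server and --wait-for-rpc cannot be combined, so the target
  # must refuse to start; the test converts that failure into a pass
  if spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected success" >&2
      exit 1
  fi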
00:04:57.389 [2024-07-15 18:59:37.752323] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.389 00:04:57.389 real 0m0.048s 00:04:57.389 user 0m0.024s 00:04:57.389 sys 0m0.024s 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.389 18:59:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:57.389 ************************************ 00:04:57.389 END TEST skip_rpc_with_delay 00:04:57.389 ************************************ 00:04:57.389 18:59:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.389 18:59:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:57.389 18:59:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:57.389 18:59:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:57.389 18:59:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.389 18:59:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.389 18:59:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.648 ************************************ 00:04:57.648 START TEST exit_on_failed_rpc_init 00:04:57.648 ************************************ 00:04:57.648 18:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:57.648 18:59:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=654061 00:04:57.648 18:59:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 654061 00:04:57.648 18:59:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.648 18:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 654061 ']' 00:04:57.648 18:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.648 18:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.648 18:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.648 18:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.648 18:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.648 [2024-07-15 18:59:37.889076] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:57.648 [2024-07-15 18:59:37.889166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid654061 ] 00:04:57.648 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.648 [2024-07-15 18:59:37.974201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.648 [2024-07-15 18:59:38.062426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.584 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:58.585 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.585 [2024-07-15 18:59:38.753239] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:58.585 [2024-07-15 18:59:38.753326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid654084 ] 00:04:58.585 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.585 [2024-07-15 18:59:38.837161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.585 [2024-07-15 18:59:38.918024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.585 [2024-07-15 18:59:38.918136] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:58.585 [2024-07-15 18:59:38.918150] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:58.585 [2024-07-15 18:59:38.918158] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.585 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:58.585 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:58.585 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:58.585 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:58.585 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:58.585 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:58.585 18:59:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:58.585 18:59:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 654061 00:04:58.585 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 654061 ']' 00:04:58.585 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 654061 00:04:58.585 18:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:58.585 18:59:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.585 18:59:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 654061 00:04:58.845 18:59:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.845 18:59:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.845 18:59:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 654061' 00:04:58.845 killing process with pid 654061 00:04:58.845 18:59:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 654061 00:04:58.845 18:59:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 654061 00:04:59.104 00:04:59.104 real 0m1.518s 00:04:59.104 user 0m1.693s 00:04:59.104 sys 0m0.483s 00:04:59.104 18:59:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.104 18:59:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.104 ************************************ 00:04:59.104 END TEST exit_on_failed_rpc_init 00:04:59.104 ************************************ 00:04:59.104 18:59:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.104 18:59:39 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:59.104 00:04:59.104 real 0m14.275s 00:04:59.104 user 0m13.595s 00:04:59.104 sys 0m1.856s 00:04:59.104 18:59:39 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.104 18:59:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.104 ************************************ 00:04:59.104 END TEST skip_rpc 00:04:59.104 ************************************ 00:04:59.104 18:59:39 -- common/autotest_common.sh@1142 -- # return 0 00:04:59.104 18:59:39 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.104 18:59:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.104 18:59:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.104 18:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:59.104 ************************************ 00:04:59.104 START TEST rpc_client 00:04:59.104 ************************************ 00:04:59.104 18:59:39 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.363 * Looking for test storage... 00:04:59.363 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:59.363 18:59:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:59.363 OK 00:04:59.363 18:59:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:59.363 00:04:59.363 real 0m0.135s 00:04:59.363 user 0m0.055s 00:04:59.363 sys 0m0.090s 00:04:59.363 18:59:39 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.363 18:59:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:59.363 ************************************ 00:04:59.363 END TEST rpc_client 00:04:59.363 ************************************ 00:04:59.363 18:59:39 -- common/autotest_common.sh@1142 -- # return 0 00:04:59.363 18:59:39 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.363 18:59:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.363 18:59:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.363 18:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:59.363 ************************************ 00:04:59.363 START TEST json_config 00:04:59.363 ************************************ 00:04:59.363 18:59:39 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.623 18:59:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.623 18:59:39 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:59.623 18:59:39 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.623 18:59:39 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.623 18:59:39 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.623 18:59:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.623 18:59:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.623 18:59:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.623 18:59:39 json_config -- paths/export.sh@5 -- # export PATH 00:04:59.623 18:59:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@47 -- # : 0 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.623 18:59:39 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.623 18:59:39 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.623 18:59:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:59.623 18:59:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:59.623 18:59:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:59.623 18:59:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:59.623 18:59:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:59.623 18:59:39 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:59.623 WARNING: No tests are enabled so not running JSON configuration tests 00:04:59.623 18:59:39 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:59.623 00:04:59.623 real 0m0.111s 00:04:59.623 user 0m0.052s 00:04:59.623 sys 0m0.060s 00:04:59.623 18:59:39 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.623 18:59:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.623 ************************************ 00:04:59.623 END TEST json_config 00:04:59.623 ************************************ 00:04:59.623 18:59:39 -- common/autotest_common.sh@1142 -- # return 0 00:04:59.623 18:59:39 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.623 18:59:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.623 18:59:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.623 18:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:59.623 ************************************ 00:04:59.623 START TEST json_config_extra_key 00:04:59.623 ************************************ 00:04:59.623 18:59:39 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.623 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:59.623 18:59:40 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.623 18:59:40 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.623 18:59:40 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.623 18:59:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.623 18:59:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.623 18:59:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.623 18:59:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:59.623 18:59:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.623 18:59:40 
json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.623 18:59:40 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.623 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:59.623 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:59.623 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:59.623 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.623 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:59.623 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.623 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:59.623 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:59.623 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:59.623 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.623 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:59.624 INFO: launching applications... 00:04:59.624 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.624 18:59:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:59.624 18:59:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:59.624 18:59:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.624 18:59:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.624 18:59:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.624 18:59:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.624 18:59:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.624 18:59:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=654406 00:04:59.624 18:59:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.624 Waiting for target to run... 
00:04:59.624 18:59:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 654406 /var/tmp/spdk_tgt.sock 00:04:59.624 18:59:40 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 654406 ']' 00:04:59.624 18:59:40 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.624 18:59:40 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.624 18:59:40 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.624 18:59:40 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.624 18:59:40 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.624 18:59:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.883 [2024-07-15 18:59:40.069830] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:59.883 [2024-07-15 18:59:40.069906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid654406 ] 00:04:59.883 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.451 [2024-07-15 18:59:40.593799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.451 [2024-07-15 18:59:40.685281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.711 18:59:40 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.711 18:59:40 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:00.711 18:59:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:00.711 00:05:00.711 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:00.711 INFO: shutting down applications... 
00:05:00.711 18:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:00.711 18:59:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:00.711 18:59:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:00.711 18:59:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 654406 ]] 00:05:00.711 18:59:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 654406 00:05:00.711 18:59:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:00.711 18:59:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.711 18:59:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 654406 00:05:00.711 18:59:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.281 18:59:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.281 18:59:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.281 18:59:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 654406 00:05:01.281 18:59:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:01.281 18:59:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:01.281 18:59:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:01.281 18:59:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:01.281 SPDK target shutdown done 00:05:01.281 18:59:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:01.281 Success 00:05:01.281 00:05:01.281 real 0m1.491s 00:05:01.281 user 0m1.053s 00:05:01.281 sys 0m0.637s 00:05:01.281 18:59:41 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.281 18:59:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:01.281 ************************************ 00:05:01.281 END TEST json_config_extra_key 00:05:01.281 ************************************ 00:05:01.281 18:59:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.281 18:59:41 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.281 18:59:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.281 18:59:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.281 18:59:41 -- common/autotest_common.sh@10 -- # set +x 00:05:01.281 ************************************ 00:05:01.281 START TEST alias_rpc 00:05:01.281 ************************************ 00:05:01.281 18:59:41 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.281 * Looking for test storage... 
00:05:01.281 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:01.281 18:59:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:01.281 18:59:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=654666 00:05:01.281 18:59:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 654666 00:05:01.281 18:59:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.281 18:59:41 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 654666 ']' 00:05:01.281 18:59:41 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.281 18:59:41 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.281 18:59:41 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.281 18:59:41 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.281 18:59:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.281 [2024-07-15 18:59:41.650361] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:01.281 [2024-07-15 18:59:41.650434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid654666 ] 00:05:01.281 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.540 [2024-07-15 18:59:41.734102] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.540 [2024-07-15 18:59:41.823777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.109 18:59:42 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.109 18:59:42 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:02.109 18:59:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:02.368 18:59:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 654666 00:05:02.368 18:59:42 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 654666 ']' 00:05:02.368 18:59:42 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 654666 00:05:02.368 18:59:42 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:02.368 18:59:42 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.368 18:59:42 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 654666 00:05:02.368 18:59:42 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.368 18:59:42 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.368 18:59:42 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 654666' 00:05:02.368 killing process with pid 654666 00:05:02.368 18:59:42 alias_rpc -- common/autotest_common.sh@967 -- # kill 654666 00:05:02.368 18:59:42 alias_rpc -- common/autotest_common.sh@972 -- # wait 654666 00:05:02.627 00:05:02.627 real 0m1.533s 00:05:02.627 user 0m1.588s 00:05:02.627 sys 0m0.493s 00:05:02.627 18:59:43 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.627 18:59:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.627 
************************************ 00:05:02.627 END TEST alias_rpc 00:05:02.627 ************************************ 00:05:02.905 18:59:43 -- common/autotest_common.sh@1142 -- # return 0 00:05:02.905 18:59:43 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:02.905 18:59:43 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:02.905 18:59:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.905 18:59:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.905 18:59:43 -- common/autotest_common.sh@10 -- # set +x 00:05:02.905 ************************************ 00:05:02.905 START TEST spdkcli_tcp 00:05:02.905 ************************************ 00:05:02.905 18:59:43 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:02.905 * Looking for test storage... 00:05:02.905 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:02.905 18:59:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:02.905 18:59:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:02.905 18:59:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:02.905 18:59:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:02.905 18:59:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:02.905 18:59:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:02.905 18:59:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:02.905 18:59:43 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.905 18:59:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.905 18:59:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=655016 00:05:02.905 18:59:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 655016 00:05:02.905 18:59:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:02.905 18:59:43 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 655016 ']' 00:05:02.905 18:59:43 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.905 18:59:43 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.905 18:59:43 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.906 18:59:43 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.906 18:59:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.906 [2024-07-15 18:59:43.269246] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:02.906 [2024-07-15 18:59:43.269344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655016 ] 00:05:02.906 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.164 [2024-07-15 18:59:43.356132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.164 [2024-07-15 18:59:43.447922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.164 [2024-07-15 18:59:43.447923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.732 18:59:44 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.732 18:59:44 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:03.732 18:59:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=655065 00:05:03.732 18:59:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:03.732 18:59:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:03.992 [ 00:05:03.992 "spdk_get_version", 00:05:03.992 "rpc_get_methods", 00:05:03.992 "trace_get_info", 00:05:03.992 "trace_get_tpoint_group_mask", 00:05:03.992 "trace_disable_tpoint_group", 00:05:03.992 "trace_enable_tpoint_group", 00:05:03.992 "trace_clear_tpoint_mask", 00:05:03.992 "trace_set_tpoint_mask", 00:05:03.992 "vfu_tgt_set_base_path", 00:05:03.992 "framework_get_pci_devices", 00:05:03.992 "framework_get_config", 00:05:03.992 "framework_get_subsystems", 00:05:03.992 "keyring_get_keys", 00:05:03.992 "iobuf_get_stats", 00:05:03.992 "iobuf_set_options", 00:05:03.992 "sock_get_default_impl", 00:05:03.992 "sock_set_default_impl", 00:05:03.992 "sock_impl_set_options", 00:05:03.992 "sock_impl_get_options", 00:05:03.992 "vmd_rescan", 00:05:03.992 "vmd_remove_device", 00:05:03.992 "vmd_enable", 00:05:03.992 "accel_get_stats", 00:05:03.992 "accel_set_options", 00:05:03.992 "accel_set_driver", 00:05:03.992 "accel_crypto_key_destroy", 00:05:03.992 "accel_crypto_keys_get", 00:05:03.992 "accel_crypto_key_create", 00:05:03.992 "accel_assign_opc", 00:05:03.992 "accel_get_module_info", 00:05:03.992 "accel_get_opc_assignments", 00:05:03.992 "notify_get_notifications", 00:05:03.992 "notify_get_types", 00:05:03.992 "bdev_get_histogram", 00:05:03.992 "bdev_enable_histogram", 00:05:03.992 "bdev_set_qos_limit", 00:05:03.992 "bdev_set_qd_sampling_period", 00:05:03.992 "bdev_get_bdevs", 00:05:03.992 "bdev_reset_iostat", 00:05:03.992 "bdev_get_iostat", 00:05:03.992 "bdev_examine", 00:05:03.992 "bdev_wait_for_examine", 00:05:03.992 "bdev_set_options", 00:05:03.992 "scsi_get_devices", 00:05:03.992 "thread_set_cpumask", 00:05:03.992 "framework_get_governor", 00:05:03.992 "framework_get_scheduler", 00:05:03.992 "framework_set_scheduler", 00:05:03.992 "framework_get_reactors", 00:05:03.992 "thread_get_io_channels", 00:05:03.992 "thread_get_pollers", 00:05:03.992 "thread_get_stats", 00:05:03.992 "framework_monitor_context_switch", 00:05:03.992 "spdk_kill_instance", 00:05:03.992 "log_enable_timestamps", 00:05:03.992 "log_get_flags", 00:05:03.992 "log_clear_flag", 00:05:03.992 "log_set_flag", 00:05:03.992 "log_get_level", 00:05:03.992 "log_set_level", 00:05:03.992 "log_get_print_level", 00:05:03.992 "log_set_print_level", 00:05:03.992 "framework_enable_cpumask_locks", 00:05:03.992 "framework_disable_cpumask_locks", 
00:05:03.992 "framework_wait_init", 00:05:03.992 "framework_start_init", 00:05:03.992 "virtio_blk_create_transport", 00:05:03.992 "virtio_blk_get_transports", 00:05:03.992 "vhost_controller_set_coalescing", 00:05:03.992 "vhost_get_controllers", 00:05:03.992 "vhost_delete_controller", 00:05:03.992 "vhost_create_blk_controller", 00:05:03.992 "vhost_scsi_controller_remove_target", 00:05:03.992 "vhost_scsi_controller_add_target", 00:05:03.992 "vhost_start_scsi_controller", 00:05:03.992 "vhost_create_scsi_controller", 00:05:03.992 "ublk_recover_disk", 00:05:03.992 "ublk_get_disks", 00:05:03.992 "ublk_stop_disk", 00:05:03.992 "ublk_start_disk", 00:05:03.992 "ublk_destroy_target", 00:05:03.992 "ublk_create_target", 00:05:03.992 "nbd_get_disks", 00:05:03.992 "nbd_stop_disk", 00:05:03.992 "nbd_start_disk", 00:05:03.992 "env_dpdk_get_mem_stats", 00:05:03.992 "nvmf_stop_mdns_prr", 00:05:03.992 "nvmf_publish_mdns_prr", 00:05:03.992 "nvmf_subsystem_get_listeners", 00:05:03.992 "nvmf_subsystem_get_qpairs", 00:05:03.992 "nvmf_subsystem_get_controllers", 00:05:03.992 "nvmf_get_stats", 00:05:03.992 "nvmf_get_transports", 00:05:03.992 "nvmf_create_transport", 00:05:03.992 "nvmf_get_targets", 00:05:03.992 "nvmf_delete_target", 00:05:03.992 "nvmf_create_target", 00:05:03.992 "nvmf_subsystem_allow_any_host", 00:05:03.992 "nvmf_subsystem_remove_host", 00:05:03.992 "nvmf_subsystem_add_host", 00:05:03.992 "nvmf_ns_remove_host", 00:05:03.992 "nvmf_ns_add_host", 00:05:03.992 "nvmf_subsystem_remove_ns", 00:05:03.992 "nvmf_subsystem_add_ns", 00:05:03.992 "nvmf_subsystem_listener_set_ana_state", 00:05:03.992 "nvmf_discovery_get_referrals", 00:05:03.992 "nvmf_discovery_remove_referral", 00:05:03.992 "nvmf_discovery_add_referral", 00:05:03.992 "nvmf_subsystem_remove_listener", 00:05:03.992 "nvmf_subsystem_add_listener", 00:05:03.992 "nvmf_delete_subsystem", 00:05:03.992 "nvmf_create_subsystem", 00:05:03.992 "nvmf_get_subsystems", 00:05:03.992 "nvmf_set_crdt", 00:05:03.992 "nvmf_set_config", 00:05:03.992 "nvmf_set_max_subsystems", 00:05:03.992 "iscsi_get_histogram", 00:05:03.992 "iscsi_enable_histogram", 00:05:03.992 "iscsi_set_options", 00:05:03.992 "iscsi_get_auth_groups", 00:05:03.992 "iscsi_auth_group_remove_secret", 00:05:03.992 "iscsi_auth_group_add_secret", 00:05:03.992 "iscsi_delete_auth_group", 00:05:03.992 "iscsi_create_auth_group", 00:05:03.992 "iscsi_set_discovery_auth", 00:05:03.992 "iscsi_get_options", 00:05:03.992 "iscsi_target_node_request_logout", 00:05:03.992 "iscsi_target_node_set_redirect", 00:05:03.992 "iscsi_target_node_set_auth", 00:05:03.992 "iscsi_target_node_add_lun", 00:05:03.992 "iscsi_get_stats", 00:05:03.992 "iscsi_get_connections", 00:05:03.992 "iscsi_portal_group_set_auth", 00:05:03.992 "iscsi_start_portal_group", 00:05:03.992 "iscsi_delete_portal_group", 00:05:03.992 "iscsi_create_portal_group", 00:05:03.992 "iscsi_get_portal_groups", 00:05:03.992 "iscsi_delete_target_node", 00:05:03.992 "iscsi_target_node_remove_pg_ig_maps", 00:05:03.992 "iscsi_target_node_add_pg_ig_maps", 00:05:03.992 "iscsi_create_target_node", 00:05:03.992 "iscsi_get_target_nodes", 00:05:03.992 "iscsi_delete_initiator_group", 00:05:03.992 "iscsi_initiator_group_remove_initiators", 00:05:03.992 "iscsi_initiator_group_add_initiators", 00:05:03.992 "iscsi_create_initiator_group", 00:05:03.992 "iscsi_get_initiator_groups", 00:05:03.992 "keyring_linux_set_options", 00:05:03.992 "keyring_file_remove_key", 00:05:03.992 "keyring_file_add_key", 00:05:03.992 "vfu_virtio_create_scsi_endpoint", 00:05:03.992 
"vfu_virtio_scsi_remove_target", 00:05:03.992 "vfu_virtio_scsi_add_target", 00:05:03.992 "vfu_virtio_create_blk_endpoint", 00:05:03.992 "vfu_virtio_delete_endpoint", 00:05:03.992 "iaa_scan_accel_module", 00:05:03.992 "dsa_scan_accel_module", 00:05:03.992 "ioat_scan_accel_module", 00:05:03.992 "accel_error_inject_error", 00:05:03.992 "bdev_iscsi_delete", 00:05:03.992 "bdev_iscsi_create", 00:05:03.992 "bdev_iscsi_set_options", 00:05:03.992 "bdev_virtio_attach_controller", 00:05:03.992 "bdev_virtio_scsi_get_devices", 00:05:03.992 "bdev_virtio_detach_controller", 00:05:03.992 "bdev_virtio_blk_set_hotplug", 00:05:03.992 "bdev_ftl_set_property", 00:05:03.992 "bdev_ftl_get_properties", 00:05:03.992 "bdev_ftl_get_stats", 00:05:03.992 "bdev_ftl_unmap", 00:05:03.992 "bdev_ftl_unload", 00:05:03.992 "bdev_ftl_delete", 00:05:03.992 "bdev_ftl_load", 00:05:03.992 "bdev_ftl_create", 00:05:03.992 "bdev_aio_delete", 00:05:03.992 "bdev_aio_rescan", 00:05:03.992 "bdev_aio_create", 00:05:03.992 "blobfs_create", 00:05:03.992 "blobfs_detect", 00:05:03.992 "blobfs_set_cache_size", 00:05:03.992 "bdev_zone_block_delete", 00:05:03.992 "bdev_zone_block_create", 00:05:03.992 "bdev_delay_delete", 00:05:03.992 "bdev_delay_create", 00:05:03.992 "bdev_delay_update_latency", 00:05:03.992 "bdev_split_delete", 00:05:03.992 "bdev_split_create", 00:05:03.992 "bdev_error_inject_error", 00:05:03.992 "bdev_error_delete", 00:05:03.992 "bdev_error_create", 00:05:03.992 "bdev_raid_set_options", 00:05:03.992 "bdev_raid_remove_base_bdev", 00:05:03.992 "bdev_raid_add_base_bdev", 00:05:03.992 "bdev_raid_delete", 00:05:03.992 "bdev_raid_create", 00:05:03.992 "bdev_raid_get_bdevs", 00:05:03.992 "bdev_lvol_set_parent_bdev", 00:05:03.992 "bdev_lvol_set_parent", 00:05:03.992 "bdev_lvol_check_shallow_copy", 00:05:03.992 "bdev_lvol_start_shallow_copy", 00:05:03.992 "bdev_lvol_grow_lvstore", 00:05:03.992 "bdev_lvol_get_lvols", 00:05:03.992 "bdev_lvol_get_lvstores", 00:05:03.992 "bdev_lvol_delete", 00:05:03.992 "bdev_lvol_set_read_only", 00:05:03.992 "bdev_lvol_resize", 00:05:03.992 "bdev_lvol_decouple_parent", 00:05:03.992 "bdev_lvol_inflate", 00:05:03.992 "bdev_lvol_rename", 00:05:03.992 "bdev_lvol_clone_bdev", 00:05:03.992 "bdev_lvol_clone", 00:05:03.992 "bdev_lvol_snapshot", 00:05:03.992 "bdev_lvol_create", 00:05:03.992 "bdev_lvol_delete_lvstore", 00:05:03.992 "bdev_lvol_rename_lvstore", 00:05:03.992 "bdev_lvol_create_lvstore", 00:05:03.992 "bdev_passthru_delete", 00:05:03.992 "bdev_passthru_create", 00:05:03.992 "bdev_nvme_cuse_unregister", 00:05:03.992 "bdev_nvme_cuse_register", 00:05:03.992 "bdev_opal_new_user", 00:05:03.992 "bdev_opal_set_lock_state", 00:05:03.992 "bdev_opal_delete", 00:05:03.992 "bdev_opal_get_info", 00:05:03.992 "bdev_opal_create", 00:05:03.992 "bdev_nvme_opal_revert", 00:05:03.992 "bdev_nvme_opal_init", 00:05:03.992 "bdev_nvme_send_cmd", 00:05:03.992 "bdev_nvme_get_path_iostat", 00:05:03.993 "bdev_nvme_get_mdns_discovery_info", 00:05:03.993 "bdev_nvme_stop_mdns_discovery", 00:05:03.993 "bdev_nvme_start_mdns_discovery", 00:05:03.993 "bdev_nvme_set_multipath_policy", 00:05:03.993 "bdev_nvme_set_preferred_path", 00:05:03.993 "bdev_nvme_get_io_paths", 00:05:03.993 "bdev_nvme_remove_error_injection", 00:05:03.993 "bdev_nvme_add_error_injection", 00:05:03.993 "bdev_nvme_get_discovery_info", 00:05:03.993 "bdev_nvme_stop_discovery", 00:05:03.993 "bdev_nvme_start_discovery", 00:05:03.993 "bdev_nvme_get_controller_health_info", 00:05:03.993 "bdev_nvme_disable_controller", 00:05:03.993 "bdev_nvme_enable_controller", 00:05:03.993 
"bdev_nvme_reset_controller", 00:05:03.993 "bdev_nvme_get_transport_statistics", 00:05:03.993 "bdev_nvme_apply_firmware", 00:05:03.993 "bdev_nvme_detach_controller", 00:05:03.993 "bdev_nvme_get_controllers", 00:05:03.993 "bdev_nvme_attach_controller", 00:05:03.993 "bdev_nvme_set_hotplug", 00:05:03.993 "bdev_nvme_set_options", 00:05:03.993 "bdev_null_resize", 00:05:03.993 "bdev_null_delete", 00:05:03.993 "bdev_null_create", 00:05:03.993 "bdev_malloc_delete", 00:05:03.993 "bdev_malloc_create" 00:05:03.993 ] 00:05:03.993 18:59:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:03.993 18:59:44 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.993 18:59:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.993 18:59:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:03.993 18:59:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 655016 00:05:03.993 18:59:44 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 655016 ']' 00:05:03.993 18:59:44 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 655016 00:05:03.993 18:59:44 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:03.993 18:59:44 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.993 18:59:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 655016 00:05:03.993 18:59:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.993 18:59:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.993 18:59:44 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 655016' 00:05:03.993 killing process with pid 655016 00:05:03.993 18:59:44 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 655016 00:05:03.993 18:59:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 655016 00:05:04.559 00:05:04.559 real 0m1.595s 00:05:04.559 user 0m2.864s 00:05:04.559 sys 0m0.547s 00:05:04.559 18:59:44 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.559 18:59:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.559 ************************************ 00:05:04.559 END TEST spdkcli_tcp 00:05:04.559 ************************************ 00:05:04.559 18:59:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.559 18:59:44 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.559 18:59:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.559 18:59:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.559 18:59:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.559 ************************************ 00:05:04.559 START TEST dpdk_mem_utility 00:05:04.559 ************************************ 00:05:04.559 18:59:44 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.559 * Looking for test storage... 
00:05:04.559 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:04.559 18:59:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:04.559 18:59:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=655301 00:05:04.559 18:59:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 655301 00:05:04.559 18:59:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.559 18:59:44 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 655301 ']' 00:05:04.559 18:59:44 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.559 18:59:44 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.559 18:59:44 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.559 18:59:44 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.559 18:59:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.559 [2024-07-15 18:59:44.945127] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:04.559 [2024-07-15 18:59:44.945205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655301 ] 00:05:04.559 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.818 [2024-07-15 18:59:45.029810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.818 [2024-07-15 18:59:45.110465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.386 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.386 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:05.386 18:59:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:05.386 18:59:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:05.386 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.386 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:05.386 { 00:05:05.386 "filename": "/tmp/spdk_mem_dump.txt" 00:05:05.386 } 00:05:05.386 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.386 18:59:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:05.645 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:05.645 1 heaps totaling size 814.000000 MiB 00:05:05.645 size: 814.000000 MiB heap id: 0 00:05:05.645 end heaps---------- 00:05:05.645 8 mempools totaling size 598.116089 MiB 00:05:05.645 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:05.645 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:05.645 size: 84.521057 MiB name: bdev_io_655301 00:05:05.645 size: 51.011292 MiB name: evtpool_655301 00:05:05.645 
size: 50.003479 MiB name: msgpool_655301 00:05:05.645 size: 21.763794 MiB name: PDU_Pool 00:05:05.645 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:05.645 size: 0.026123 MiB name: Session_Pool 00:05:05.645 end mempools------- 00:05:05.645 6 memzones totaling size 4.142822 MiB 00:05:05.645 size: 1.000366 MiB name: RG_ring_0_655301 00:05:05.645 size: 1.000366 MiB name: RG_ring_1_655301 00:05:05.645 size: 1.000366 MiB name: RG_ring_4_655301 00:05:05.645 size: 1.000366 MiB name: RG_ring_5_655301 00:05:05.645 size: 0.125366 MiB name: RG_ring_2_655301 00:05:05.645 size: 0.015991 MiB name: RG_ring_3_655301 00:05:05.645 end memzones------- 00:05:05.645 18:59:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:05.645 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:05.645 list of free elements. size: 12.519348 MiB 00:05:05.645 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:05.645 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:05.645 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:05.645 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:05.645 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:05.645 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:05.645 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:05.645 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:05.645 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:05.645 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:05.645 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:05.645 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:05.645 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:05.645 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:05.645 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:05.645 list of standard malloc elements. 
size: 199.218079 MiB 00:05:05.645 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:05.645 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:05.645 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:05.645 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:05.645 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:05.645 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:05.645 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:05.645 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:05.645 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:05.645 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:05.645 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:05.645 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:05.645 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:05.645 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:05.645 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:05.645 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:05.645 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:05.645 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:05.645 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:05.645 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:05.645 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:05.645 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:05.645 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:05.645 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:05.645 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:05.645 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:05.645 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:05.645 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:05.645 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:05.645 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:05.645 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:05.645 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:05.645 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:05.645 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:05.645 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:05.645 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:05.645 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:05.645 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:05.645 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:05.645 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:05.645 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:05.645 list of memzone associated elements. 
size: 602.262573 MiB 00:05:05.645 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:05.645 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:05.645 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:05.645 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:05.645 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:05.645 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_655301_0 00:05:05.645 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:05.645 associated memzone info: size: 48.002930 MiB name: MP_evtpool_655301_0 00:05:05.645 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:05.645 associated memzone info: size: 48.002930 MiB name: MP_msgpool_655301_0 00:05:05.645 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:05.645 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:05.645 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:05.645 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:05.645 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:05.645 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_655301 00:05:05.645 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:05.646 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_655301 00:05:05.646 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:05.646 associated memzone info: size: 1.007996 MiB name: MP_evtpool_655301 00:05:05.646 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:05.646 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:05.646 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:05.646 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:05.646 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:05.646 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:05.646 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:05.646 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:05.646 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:05.646 associated memzone info: size: 1.000366 MiB name: RG_ring_0_655301 00:05:05.646 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:05.646 associated memzone info: size: 1.000366 MiB name: RG_ring_1_655301 00:05:05.646 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:05.646 associated memzone info: size: 1.000366 MiB name: RG_ring_4_655301 00:05:05.646 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:05.646 associated memzone info: size: 1.000366 MiB name: RG_ring_5_655301 00:05:05.646 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:05.646 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_655301 00:05:05.646 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:05.646 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:05.646 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:05.646 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:05.646 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:05.646 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:05.646 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:05.646 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_655301 00:05:05.646 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:05.646 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:05.646 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:05.646 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:05.646 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:05.646 associated memzone info: size: 0.015991 MiB name: RG_ring_3_655301 00:05:05.646 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:05.646 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:05.646 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:05.646 associated memzone info: size: 0.000183 MiB name: MP_msgpool_655301 00:05:05.646 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:05.646 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_655301 00:05:05.646 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:05.646 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:05.646 18:59:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:05.646 18:59:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 655301 00:05:05.646 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 655301 ']' 00:05:05.646 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 655301 00:05:05.646 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:05.646 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.646 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 655301 00:05:05.646 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.646 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.646 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 655301' 00:05:05.646 killing process with pid 655301 00:05:05.646 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 655301 00:05:05.646 18:59:45 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 655301 00:05:05.905 00:05:05.905 real 0m1.471s 00:05:05.905 user 0m1.499s 00:05:05.905 sys 0m0.475s 00:05:05.905 18:59:46 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.905 18:59:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:05.905 ************************************ 00:05:05.905 END TEST dpdk_mem_utility 00:05:05.905 ************************************ 00:05:05.905 18:59:46 -- common/autotest_common.sh@1142 -- # return 0 00:05:05.905 18:59:46 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:05.905 18:59:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.905 18:59:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.905 18:59:46 -- common/autotest_common.sh@10 -- # set +x 00:05:06.164 ************************************ 00:05:06.164 START TEST event 00:05:06.164 ************************************ 00:05:06.164 18:59:46 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:06.164 * Looking for test storage... 
00:05:06.164 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:06.164 18:59:46 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:06.164 18:59:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:06.164 18:59:46 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:06.164 18:59:46 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:06.164 18:59:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.164 18:59:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.164 ************************************ 00:05:06.164 START TEST event_perf 00:05:06.164 ************************************ 00:05:06.164 18:59:46 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:06.164 Running I/O for 1 seconds...[2024-07-15 18:59:46.536616] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:06.164 [2024-07-15 18:59:46.536706] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655545 ] 00:05:06.164 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.422 [2024-07-15 18:59:46.625361] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.423 [2024-07-15 18:59:46.716901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.423 [2024-07-15 18:59:46.717000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.423 [2024-07-15 18:59:46.717121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.423 [2024-07-15 18:59:46.717122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.357 Running I/O for 1 seconds... 00:05:07.357 lcore 0: 186595 00:05:07.357 lcore 1: 186595 00:05:07.357 lcore 2: 186595 00:05:07.357 lcore 3: 186596 00:05:07.616 done. 00:05:07.616 00:05:07.616 real 0m1.274s 00:05:07.616 user 0m4.158s 00:05:07.616 sys 0m0.111s 00:05:07.616 18:59:47 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.616 18:59:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:07.616 ************************************ 00:05:07.616 END TEST event_perf 00:05:07.616 ************************************ 00:05:07.616 18:59:47 event -- common/autotest_common.sh@1142 -- # return 0 00:05:07.616 18:59:47 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:07.616 18:59:47 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:07.616 18:59:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.616 18:59:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.616 ************************************ 00:05:07.616 START TEST event_reactor 00:05:07.616 ************************************ 00:05:07.616 18:59:47 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:07.616 [2024-07-15 18:59:47.896780] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
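The event_perf run above pins one reactor per core in the 0xF mask and, after the one-second window, each lcore prints its private event counter; counters landing within a whisker of each other (186595/186595/186595/186596 here) are the healthy signature. A throwaway awk check of the spread, assuming a captured log file named perf.log (hypothetical) containing the `lcore N: COUNT` lines seen above:

    # Report min/max/spread of the per-lcore event counts in an event_perf log.
    awk '/lcore [0-9]+:/ {
             c = $NF + 0                                  # the count is the last field
             if (min == "" || c < min) min = c
             if (c > max) max = c
         }
         END { if (min != "") printf "min=%d max=%d spread=%d\n", min, max, max - min }' perf.log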
00:05:07.616 [2024-07-15 18:59:47.896867] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655754 ] 00:05:07.616 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.616 [2024-07-15 18:59:47.986430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.874 [2024-07-15 18:59:48.075838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.811 test_start 00:05:08.811 oneshot 00:05:08.811 tick 100 00:05:08.811 tick 100 00:05:08.811 tick 250 00:05:08.811 tick 100 00:05:08.811 tick 100 00:05:08.811 tick 100 00:05:08.811 tick 250 00:05:08.811 tick 500 00:05:08.811 tick 100 00:05:08.811 tick 100 00:05:08.811 tick 250 00:05:08.811 tick 100 00:05:08.811 tick 100 00:05:08.811 test_end 00:05:08.811 00:05:08.811 real 0m1.270s 00:05:08.811 user 0m1.153s 00:05:08.811 sys 0m0.113s 00:05:08.811 18:59:49 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.811 18:59:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:08.811 ************************************ 00:05:08.811 END TEST event_reactor 00:05:08.811 ************************************ 00:05:08.811 18:59:49 event -- common/autotest_common.sh@1142 -- # return 0 00:05:08.811 18:59:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.811 18:59:49 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:08.811 18:59:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.811 18:59:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.811 ************************************ 00:05:08.811 START TEST event_reactor_perf 00:05:08.811 ************************************ 00:05:08.811 18:59:49 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:09.070 [2024-07-15 18:59:49.251904] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:09.070 [2024-07-15 18:59:49.251992] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655952 ] 00:05:09.070 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.070 [2024-07-15 18:59:49.339052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.070 [2024-07-15 18:59:49.427246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.447 test_start 00:05:10.447 test_end 00:05:10.447 Performance: 935442 events per second 00:05:10.447 00:05:10.447 real 0m1.268s 00:05:10.447 user 0m1.157s 00:05:10.447 sys 0m0.106s 00:05:10.447 18:59:50 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.447 18:59:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.447 ************************************ 00:05:10.447 END TEST event_reactor_perf 00:05:10.447 ************************************ 00:05:10.447 18:59:50 event -- common/autotest_common.sh@1142 -- # return 0 00:05:10.447 18:59:50 event -- event/event.sh@49 -- # uname -s 00:05:10.447 18:59:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:10.447 18:59:50 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:10.447 18:59:50 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.447 18:59:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.447 18:59:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.447 ************************************ 00:05:10.447 START TEST event_scheduler 00:05:10.447 ************************************ 00:05:10.447 18:59:50 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:10.447 * Looking for test storage... 00:05:10.447 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:10.447 18:59:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:10.447 18:59:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=656187 00:05:10.447 18:59:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.447 18:59:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:10.447 18:59:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 656187 00:05:10.447 18:59:50 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 656187 ']' 00:05:10.447 18:59:50 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.447 18:59:50 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.447 18:59:50 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
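event_scheduler launches its app with --wait-for-rpc -f and immediately blocks in waitforlisten (max_retries=100 in the trace) until the RPC socket at /var/tmp/spdk.sock answers. The helper's exact internals are not shown in the trace; a minimal stand-in that polls the socket with the stock rpc_get_methods RPC, assuming it runs from an SPDK checkout (the 0.1 s interval is an assumption):

    # Block until the app behind $pid serves RPCs on $rpc_addr, or give up.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do                   # max_retries=100, as traced
            kill -0 "$pid" 2> /dev/null || return 1       # app died while we waited
            if [ -S "$rpc_addr" ] &&
                scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                  # socket up and answering
            fi
            sleep 0.1
        done
        return 1
    }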
00:05:10.447 18:59:50 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.447 18:59:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.447 [2024-07-15 18:59:50.729880] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:10.447 [2024-07-15 18:59:50.729952] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid656187 ] 00:05:10.447 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.447 [2024-07-15 18:59:50.813460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.710 [2024-07-15 18:59:50.899020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.710 [2024-07-15 18:59:50.899120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.710 [2024-07-15 18:59:50.899227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.710 [2024-07-15 18:59:50.899241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.324 18:59:51 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.324 18:59:51 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:11.324 18:59:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:11.324 18:59:51 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.324 18:59:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.324 [2024-07-15 18:59:51.589840] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:11.324 [2024-07-15 18:59:51.589863] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:11.324 [2024-07-15 18:59:51.589877] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:11.324 [2024-07-15 18:59:51.589885] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:11.324 [2024-07-15 18:59:51.589892] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:11.324 18:59:51 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.324 18:59:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:11.324 18:59:51 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.324 18:59:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.324 [2024-07-15 18:59:51.665716] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
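That startup sequence is worth reading closely: the dpdk_governor error is non-fatal (the app core mask covers only some SMT siblings, so the governor bows out and the dynamic scheduler falls back to its built-in limits: load 20, core 80, busy 95), and only after the scheduler is set does the test release initialization. Reissued by hand from an SPDK checkout against the default socket, the two RPCs from scheduler.sh@39-40 are:

    # Switch the scheduler while the app idles in --wait-for-rpc, then let it boot.
    scripts/rpc.py framework_set_scheduler dynamic   # issued before init, as in the trace
    scripts/rpc.py framework_start_init              # releases the --wait-for-rpc hold

The ordering is the point of --wait-for-rpc: it parks the app right after the RPC server comes up, so configuration such as the scheduler choice lands before subsystem init.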
00:05:11.324 18:59:51 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.324 18:59:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:11.324 18:59:51 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.324 18:59:51 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.324 18:59:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.324 ************************************ 00:05:11.324 START TEST scheduler_create_thread 00:05:11.324 ************************************ 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.324 2 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.324 3 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.324 4 00:05:11.324 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.583 5 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.583 6 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.583 7 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.583 8 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.583 9 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.583 10 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.583 18:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.521 18:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.521 18:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:12.521 18:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.521 18:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.899 18:59:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.899 18:59:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:13.899 18:59:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:13.899 18:59:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.899 18:59:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.834 18:59:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.834 00:05:14.834 real 0m3.384s 00:05:14.834 user 0m0.026s 00:05:14.834 sys 0m0.007s 00:05:14.834 18:59:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.834 18:59:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.834 ************************************ 00:05:14.834 END TEST scheduler_create_thread 00:05:14.834 ************************************ 00:05:14.834 18:59:55 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:14.834 18:59:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:14.834 18:59:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 656187 00:05:14.834 18:59:55 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 656187 ']' 00:05:14.834 18:59:55 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 656187 00:05:14.834 18:59:55 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:14.834 18:59:55 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.834 18:59:55 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 656187 00:05:14.834 18:59:55 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:14.834 18:59:55 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:14.834 18:59:55 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 656187' 00:05:14.834 killing process with pid 656187 00:05:14.834 18:59:55 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 656187 00:05:14.834 18:59:55 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 656187 00:05:15.092 [2024-07-15 18:59:55.473701] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
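scheduler_create_thread exercised the whole thread lifecycle through the test app's RPC plugin: pinned active threads at weight 100 (masks 0x1 through 0x8), pinned idle threads at 0, unpinned one_third_active/half_active threads, a re-weighting of thread 11 to 50, and a create-then-delete of thread 12. Condensed into a stand-alone sequence (assuming rpc.py can import scheduler_plugin, i.e. PYTHONPATH includes the scheduler test app's directory):

    # Thread lifecycle against the scheduler test app, condensed from the trace.
    rpc="scripts/rpc.py --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% busy, pinned to core 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread on the same core
    tid=$($rpc scheduler_thread_create -n half_active -a 0)       # unpinned; the RPC echoes the new thread id
    $rpc scheduler_thread_set_active "$tid" 50                    # re-weight it to 50% active
    tid=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$tid"                           # and tear the throwaway down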
00:05:15.350 00:05:15.350 real 0m5.111s 00:05:15.350 user 0m10.501s 00:05:15.350 sys 0m0.454s 00:05:15.350 18:59:55 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.350 18:59:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.350 ************************************ 00:05:15.350 END TEST event_scheduler 00:05:15.350 ************************************ 00:05:15.350 18:59:55 event -- common/autotest_common.sh@1142 -- # return 0 00:05:15.350 18:59:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:15.350 18:59:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:15.350 18:59:55 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.350 18:59:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.350 18:59:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.609 ************************************ 00:05:15.609 START TEST app_repeat 00:05:15.609 ************************************ 00:05:15.609 18:59:55 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=656969 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 656969' 00:05:15.609 Process app_repeat pid: 656969 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:15.609 spdk_app_start Round 0 00:05:15.609 18:59:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 656969 /var/tmp/spdk-nbd.sock 00:05:15.609 18:59:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 656969 ']' 00:05:15.609 18:59:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.609 18:59:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.609 18:59:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.609 18:59:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.609 18:59:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.609 [2024-07-15 18:59:55.826588] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:15.609 [2024-07-15 18:59:55.826678] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid656969 ] 00:05:15.609 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.609 [2024-07-15 18:59:55.915610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.609 [2024-07-15 18:59:56.006071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.609 [2024-07-15 18:59:56.006072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.599 18:59:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.599 18:59:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:16.599 18:59:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.599 Malloc0 00:05:16.599 18:59:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.857 Malloc1 00:05:16.857 18:59:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.857 /dev/nbd0 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:16.857 18:59:57 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.857 1+0 records in 00:05:16.857 1+0 records out 00:05:16.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023291 s, 17.6 MB/s 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:16.857 18:59:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:16.857 18:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.115 18:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.115 18:59:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.115 /dev/nbd1 00:05:17.115 18:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.115 18:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.115 1+0 records in 00:05:17.115 1+0 records out 00:05:17.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258298 s, 15.9 MB/s 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:17.115 18:59:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:17.115 18:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.115 
18:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.115 18:59:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.115 18:59:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.115 18:59:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.374 { 00:05:17.374 "nbd_device": "/dev/nbd0", 00:05:17.374 "bdev_name": "Malloc0" 00:05:17.374 }, 00:05:17.374 { 00:05:17.374 "nbd_device": "/dev/nbd1", 00:05:17.374 "bdev_name": "Malloc1" 00:05:17.374 } 00:05:17.374 ]' 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.374 { 00:05:17.374 "nbd_device": "/dev/nbd0", 00:05:17.374 "bdev_name": "Malloc0" 00:05:17.374 }, 00:05:17.374 { 00:05:17.374 "nbd_device": "/dev/nbd1", 00:05:17.374 "bdev_name": "Malloc1" 00:05:17.374 } 00:05:17.374 ]' 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.374 /dev/nbd1' 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.374 /dev/nbd1' 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.374 18:59:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.374 256+0 records in 00:05:17.374 256+0 records out 00:05:17.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104489 s, 100 MB/s 00:05:17.375 18:59:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.375 18:59:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.375 256+0 records in 00:05:17.375 256+0 records out 00:05:17.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207371 s, 50.6 MB/s 00:05:17.375 18:59:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.375 18:59:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.634 256+0 records in 00:05:17.634 256+0 records out 
00:05:17.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022589 s, 46.4 MB/s 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.634 18:59:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.634 18:59:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.634 18:59:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.634 18:59:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.634 18:59:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.634 18:59:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.634 18:59:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.634 18:59:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.634 18:59:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.634 18:59:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.634 18:59:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.892 18:59:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.892 18:59:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.892 18:59:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.892 18:59:58 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.892 18:59:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.892 18:59:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.892 18:59:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.892 18:59:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.892 18:59:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.892 18:59:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.892 18:59:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.150 18:59:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.150 18:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.150 18:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.150 18:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.150 18:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.150 18:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.150 18:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.150 18:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.150 18:59:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.150 18:59:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.150 18:59:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.150 18:59:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.150 18:59:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.409 18:59:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.668 [2024-07-15 18:59:58.884047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.668 [2024-07-15 18:59:58.963144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.668 [2024-07-15 18:59:58.963144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.668 [2024-07-15 18:59:59.005198] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.668 [2024-07-15 18:59:59.005245] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.954 19:00:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.954 19:00:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:21.954 spdk_app_start Round 1 00:05:21.954 19:00:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 656969 /var/tmp/spdk-nbd.sock 00:05:21.954 19:00:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 656969 ']' 00:05:21.954 19:00:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.954 19:00:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.955 19:00:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
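Round 0 above is the complete nbd verification cycle: two 64 MiB malloc bdevs exported as /dev/nbd0 and /dev/nbd1, a waitfornbd poll of /proc/partitions plus a one-block direct read as a smoke test, then 1 MiB of urandom pushed through each device and compared back with cmp before both are stopped. The same cycle condensed into one function (a sketch; the /tmp scratch paths stand in for the nbdtest/nbdrandtest files in the spdk tree that the trace uses, and the retry loop around the partition check is dropped):

    # Data-integrity pass over one attached NBD device, per the Round 0 trace.
    verify_nbd() {
        local nbd=$1 rand=/tmp/nbdrandtest
        grep -q -w "$(basename "$nbd")" /proc/partitions || return 1  # kernel node present?
        dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct     # one-block smoke read
        dd if=/dev/urandom of="$rand" bs=4096 count=256               # 1 MiB of random data
        dd if="$rand" of="$nbd" bs=4096 count=256 oflag=direct        # write it through NBD
        cmp -b -n 1M "$rand" "$nbd"                                   # read back and compare
        rm -f "$rand" /tmp/nbdtest
    }
    verify_nbd /dev/nbd0 && verify_nbd /dev/nbd1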
00:05:21.955 19:00:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.955 19:00:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.955 19:00:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.955 19:00:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:21.955 19:00:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.955 Malloc0 00:05:21.955 19:00:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.955 Malloc1 00:05:21.955 19:00:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.955 19:00:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.216 /dev/nbd0 00:05:22.216 19:00:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.216 19:00:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.216 1+0 records in 00:05:22.216 1+0 records out 00:05:22.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235844 s, 17.4 MB/s 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:22.216 19:00:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:22.216 19:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.216 19:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.216 19:00:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.476 /dev/nbd1 00:05:22.476 19:00:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.476 19:00:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.476 1+0 records in 00:05:22.476 1+0 records out 00:05:22.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272455 s, 15.0 MB/s 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:22.476 19:00:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:22.476 19:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.476 19:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.476 19:00:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.476 19:00:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.476 19:00:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.736 { 00:05:22.736 "nbd_device": "/dev/nbd0", 00:05:22.736 "bdev_name": "Malloc0" 00:05:22.736 }, 00:05:22.736 { 00:05:22.736 "nbd_device": "/dev/nbd1", 00:05:22.736 "bdev_name": "Malloc1" 00:05:22.736 } 00:05:22.736 ]' 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.736 { 00:05:22.736 "nbd_device": "/dev/nbd0", 00:05:22.736 "bdev_name": "Malloc0" 00:05:22.736 }, 00:05:22.736 { 00:05:22.736 "nbd_device": "/dev/nbd1", 00:05:22.736 "bdev_name": "Malloc1" 00:05:22.736 } 00:05:22.736 ]' 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.736 /dev/nbd1' 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.736 /dev/nbd1' 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.736 19:00:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.736 256+0 records in 00:05:22.736 256+0 records out 00:05:22.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114933 s, 91.2 MB/s 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.736 256+0 records in 00:05:22.736 256+0 records out 00:05:22.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211002 s, 49.7 MB/s 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.736 256+0 records in 00:05:22.736 256+0 records out 00:05:22.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224802 s, 46.6 MB/s 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.736 19:00:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.002 19:00:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.002 19:00:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.002 19:00:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.002 19:00:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.002 19:00:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.002 19:00:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.002 19:00:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.002 19:00:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.002 19:00:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.002 19:00:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.265 19:00:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.524 19:00:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.524 19:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.524 19:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.524 19:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.524 19:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.524 19:00:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.524 19:00:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.524 19:00:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.524 19:00:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.524 19:00:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.524 19:00:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.782 [2024-07-15 19:00:04.151839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.041 [2024-07-15 19:00:04.233945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.041 [2024-07-15 19:00:04.233946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.041 [2024-07-15 19:00:04.281113] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.041 [2024-07-15 19:00:04.281159] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.573 19:00:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.573 19:00:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:26.573 spdk_app_start Round 2 00:05:26.573 19:00:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 656969 /var/tmp/spdk-nbd.sock 00:05:26.573 19:00:06 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 656969 ']' 00:05:26.573 19:00:06 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.573 19:00:06 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.573 19:00:06 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
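Note: the round traced above is nbd_common.sh's write/verify cycle: 1 MiB of random data is staged in a temporary file, written through each exported NBD device with O_DIRECT, then read back and compared byte for byte. A minimal sketch of that pattern, with the block size, count, and cmp bounds taken from the trace (the temp-file path here is illustrative, not the one the test uses):

    tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)           # illustrative path
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256  # 256 x 4 KiB = 1 MiB
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # bypass the page cache
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$dev"                  # exits non-zero on any mismatch
    done
    rm "$tmp_file"

The oflag=direct on the writes and the 1M cap on cmp match the dd/cmp invocations in the log; a cmp failure would fail the round before the devices are stopped.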
00:05:26.573 19:00:06 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.573 19:00:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.832 19:00:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.832 19:00:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:26.832 19:00:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.091 Malloc0 00:05:27.091 19:00:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.091 Malloc1 00:05:27.348 19:00:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.348 /dev/nbd0 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.348 1+0 records in 00:05:27.348 1+0 records out 00:05:27.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224664 s, 18.2 MB/s 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:27.348 19:00:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.348 19:00:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.606 /dev/nbd1 00:05:27.606 19:00:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.606 19:00:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.606 1+0 records in 00:05:27.606 1+0 records out 00:05:27.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245044 s, 16.7 MB/s 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:27.606 19:00:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:27.606 19:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.606 19:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.606 19:00:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.606 19:00:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.606 19:00:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.864 { 00:05:27.864 "nbd_device": "/dev/nbd0", 00:05:27.864 "bdev_name": "Malloc0" 00:05:27.864 }, 00:05:27.864 { 00:05:27.864 "nbd_device": "/dev/nbd1", 00:05:27.864 "bdev_name": "Malloc1" 00:05:27.864 } 00:05:27.864 ]' 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.864 { 00:05:27.864 "nbd_device": "/dev/nbd0", 00:05:27.864 "bdev_name": "Malloc0" 00:05:27.864 }, 00:05:27.864 { 00:05:27.864 "nbd_device": "/dev/nbd1", 00:05:27.864 "bdev_name": "Malloc1" 00:05:27.864 } 00:05:27.864 ]' 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.864 /dev/nbd1' 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.864 /dev/nbd1' 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.864 256+0 records in 00:05:27.864 256+0 records out 00:05:27.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112074 s, 93.6 MB/s 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.864 256+0 records in 00:05:27.864 256+0 records out 00:05:27.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211486 s, 49.6 MB/s 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.864 256+0 records in 00:05:27.864 256+0 records out 00:05:27.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227565 s, 46.1 MB/s 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.864 19:00:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.865 19:00:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.865 19:00:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.865 19:00:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.865 19:00:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.865 19:00:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.865 19:00:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.865 19:00:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.865 19:00:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.865 19:00:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.865 19:00:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.865 19:00:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.122 19:00:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.122 19:00:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.122 19:00:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.122 19:00:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.122 19:00:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.122 19:00:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.122 19:00:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.122 19:00:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.122 19:00:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.122 19:00:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.381 19:00:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.381 19:00:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.381 19:00:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.381 19:00:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.381 19:00:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.381 19:00:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.381 19:00:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.381 19:00:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.381 19:00:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.381 19:00:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.381 19:00:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.639 19:00:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.639 19:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.639 19:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.639 19:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.639 19:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.639 19:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.639 19:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.639 19:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.639 19:00:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.639 19:00:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.639 19:00:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.639 19:00:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.639 19:00:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.897 19:00:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.897 [2024-07-15 19:00:09.325365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.155 [2024-07-15 19:00:09.406534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.155 [2024-07-15 19:00:09.406535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.155 [2024-07-15 19:00:09.452797] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.156 [2024-07-15 19:00:09.452843] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.439 19:00:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 656969 /var/tmp/spdk-nbd.sock 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 656969 ']' 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
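Note: teardown in each round has the same shape: stop every export over the RPC socket, then poll /proc/partitions until the kernel has actually released the device node. A sketch of that waitfornbd_exit-style loop, assuming the socket path from the trace (the 20-iteration cap matches the counter in the log; the 0.1s sleep between probes is an assumption):

    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w nbd0 /proc/partitions; then
            sleep 0.1    # still registered; give the kernel a moment
        else
            break        # device gone, safe to move on
        fi
    done

Polling /proc/partitions rather than trusting the RPC return alone matters here: the device node can linger briefly after nbd_stop_disk returns, and the next round re-exports the same names.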
00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:32.439 19:00:12 event.app_repeat -- event/event.sh@39 -- # killprocess 656969 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 656969 ']' 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 656969 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 656969 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 656969' 00:05:32.439 killing process with pid 656969 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@967 -- # kill 656969 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@972 -- # wait 656969 00:05:32.439 spdk_app_start is called in Round 0. 00:05:32.439 Shutdown signal received, stop current app iteration 00:05:32.439 Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 reinitialization... 00:05:32.439 spdk_app_start is called in Round 1. 00:05:32.439 Shutdown signal received, stop current app iteration 00:05:32.439 Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 reinitialization... 00:05:32.439 spdk_app_start is called in Round 2. 00:05:32.439 Shutdown signal received, stop current app iteration 00:05:32.439 Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 reinitialization... 00:05:32.439 spdk_app_start is called in Round 3. 
00:05:32.439 Shutdown signal received, stop current app iteration 00:05:32.439 19:00:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:32.439 19:00:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:32.439 00:05:32.439 real 0m16.758s 00:05:32.439 user 0m35.576s 00:05:32.439 sys 0m3.324s 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.439 19:00:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.439 ************************************ 00:05:32.439 END TEST app_repeat 00:05:32.439 ************************************ 00:05:32.439 19:00:12 event -- common/autotest_common.sh@1142 -- # return 0 00:05:32.439 19:00:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:32.439 19:00:12 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:32.439 19:00:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.439 19:00:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.439 19:00:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.439 ************************************ 00:05:32.439 START TEST cpu_locks 00:05:32.439 ************************************ 00:05:32.439 19:00:12 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:32.439 * Looking for test storage... 00:05:32.439 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:32.439 19:00:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:32.439 19:00:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:32.439 19:00:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:32.439 19:00:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:32.439 19:00:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.439 19:00:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.439 19:00:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.439 ************************************ 00:05:32.439 START TEST default_locks 00:05:32.439 ************************************ 00:05:32.439 19:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:32.439 19:00:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=659907 00:05:32.439 19:00:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.439 19:00:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 659907 00:05:32.439 19:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 659907 ']' 00:05:32.439 19:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.439 19:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.439 19:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
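Note: the cpu_locks suite starting here revolves around one observable: an spdk_tgt that claims a core holds a file lock whose name contains spdk_cpu_lock, and the tests probe it with lslocks. A sketch of the locks_exist-style check used in every round below (the helper name mirrors the trace; the pid is whatever spdk_tgt reported for that round):

    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock   # true iff the pid holds a core lock
    }
    locks_exist 659907 && echo "core lock held"

The stray "lslocks: write error" lines in the log are harmless: grep -q exits as soon as it matches, so lslocks sees a closed pipe on its remaining output.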
00:05:32.439 19:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.439 19:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.439 [2024-07-15 19:00:12.821728] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:32.439 [2024-07-15 19:00:12.821796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659907 ] 00:05:32.439 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.698 [2024-07-15 19:00:12.905143] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.698 [2024-07-15 19:00:12.993700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.278 19:00:13 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.278 19:00:13 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:33.278 19:00:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 659907 00:05:33.278 19:00:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 659907 00:05:33.278 19:00:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.844 lslocks: write error 00:05:33.844 19:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 659907 00:05:33.844 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 659907 ']' 00:05:33.844 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 659907 00:05:33.844 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:33.844 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.844 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 659907 00:05:33.844 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.844 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.844 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 659907' 00:05:33.844 killing process with pid 659907 00:05:33.844 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 659907 00:05:33.844 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 659907 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 659907 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 659907 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 659907 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 659907 ']' 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.103 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (659907) - No such process 00:05:34.103 ERROR: process (pid: 659907) is no longer running 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.103 00:05:34.103 real 0m1.689s 00:05:34.103 user 0m1.741s 00:05:34.103 sys 0m0.615s 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.103 19:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.103 ************************************ 00:05:34.103 END TEST default_locks 00:05:34.103 ************************************ 00:05:34.103 19:00:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:34.103 19:00:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:34.103 19:00:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.103 19:00:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.103 19:00:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.363 ************************************ 00:05:34.363 START TEST default_locks_via_rpc 00:05:34.363 ************************************ 00:05:34.363 19:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:34.363 19:00:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=660134 00:05:34.363 19:00:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 660134 00:05:34.363 19:00:14 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.363 19:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 660134 ']' 00:05:34.363 19:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.363 19:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.363 19:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.363 19:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.363 19:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.363 [2024-07-15 19:00:14.589172] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:34.363 [2024-07-15 19:00:14.589246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660134 ] 00:05:34.363 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.363 [2024-07-15 19:00:14.672457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.363 [2024-07-15 19:00:14.762822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 660134 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 660134 00:05:35.300 19:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
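Note: default_locks_via_rpc exercises the same lock from the other direction, toggling it at runtime instead of at startup. A minimal sketch of the sequence just traced, using the RPC method names verbatim from the log (rpc.py defaults to the /var/tmp/spdk.sock socket used here; $pid stands for the target started above, 660134):

    rpc.py framework_disable_cpumask_locks    # release the per-core file locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"
    rpc.py framework_enable_cpumask_locks     # re-acquire them
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "unexpected: lock missing"

The test's no_locks/locks_exist helpers wrap the same two lslocks probes.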
00:05:35.559 19:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 660134 00:05:35.559 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 660134 ']' 00:05:35.559 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 660134 00:05:35.559 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:35.559 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.559 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 660134 00:05:35.559 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.559 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.559 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 660134' 00:05:35.559 killing process with pid 660134 00:05:35.559 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 660134 00:05:35.559 19:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 660134 00:05:36.127 00:05:36.127 real 0m1.743s 00:05:36.127 user 0m1.799s 00:05:36.127 sys 0m0.625s 00:05:36.127 19:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.127 19:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.127 ************************************ 00:05:36.127 END TEST default_locks_via_rpc 00:05:36.127 ************************************ 00:05:36.127 19:00:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:36.127 19:00:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:36.127 19:00:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.127 19:00:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.127 19:00:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.127 ************************************ 00:05:36.127 START TEST non_locking_app_on_locked_coremask 00:05:36.127 ************************************ 00:05:36.127 19:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:36.127 19:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=660514 00:05:36.127 19:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 660514 /var/tmp/spdk.sock 00:05:36.127 19:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.127 19:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 660514 ']' 00:05:36.127 19:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.127 19:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.127 19:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.128 19:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.128 19:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.128 [2024-07-15 19:00:16.421054] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:36.128 [2024-07-15 19:00:16.421122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660514 ] 00:05:36.128 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.128 [2024-07-15 19:00:16.506545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.386 [2024-07-15 19:00:16.595051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.957 19:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.957 19:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:36.957 19:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=660528 00:05:36.957 19:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 660528 /var/tmp/spdk2.sock 00:05:36.957 19:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:36.957 19:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 660528 ']' 00:05:36.957 19:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.957 19:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.957 19:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.957 19:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.957 19:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.957 [2024-07-15 19:00:17.281005] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:36.957 [2024-07-15 19:00:17.281083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660528 ] 00:05:36.957 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.957 [2024-07-15 19:00:17.377916] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
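Note: this round starts two targets on the same core mask, which only works because the second one opts out of core locking. A sketch of the launch pattern, with the flags copied from the trace (binary paths shortened for readability):

    spdk_tgt -m 0x1 &                                                 # takes the core-0 lock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # skips it, hence the
                                                                      # "CPU core locks deactivated" notice

Without --disable-cpumask-locks the second instance would fail to claim core 0, which is exactly what the locking_app_on_locked_coremask round further down demonstrates.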
00:05:36.957 [2024-07-15 19:00:17.377946] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.216 [2024-07-15 19:00:17.538349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.782 19:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.782 19:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:37.782 19:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 660514 00:05:37.782 19:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 660514 00:05:37.782 19:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.156 lslocks: write error 00:05:39.156 19:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 660514 00:05:39.156 19:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 660514 ']' 00:05:39.156 19:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 660514 00:05:39.156 19:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:39.156 19:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.156 19:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 660514 00:05:39.156 19:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.156 19:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.156 19:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 660514' 00:05:39.156 killing process with pid 660514 00:05:39.156 19:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 660514 00:05:39.156 19:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 660514 00:05:39.722 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 660528 00:05:39.722 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 660528 ']' 00:05:39.722 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 660528 00:05:39.722 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:39.722 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.722 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 660528 00:05:39.722 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.722 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.722 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 660528' 00:05:39.722 killing 
process with pid 660528 00:05:39.722 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 660528 00:05:39.722 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 660528 00:05:40.288 00:05:40.288 real 0m4.087s 00:05:40.288 user 0m4.319s 00:05:40.288 sys 0m1.352s 00:05:40.288 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.288 19:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.288 ************************************ 00:05:40.288 END TEST non_locking_app_on_locked_coremask 00:05:40.288 ************************************ 00:05:40.288 19:00:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:40.288 19:00:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:40.288 19:00:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.288 19:00:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.288 19:00:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.288 ************************************ 00:05:40.288 START TEST locking_app_on_unlocked_coremask 00:05:40.288 ************************************ 00:05:40.288 19:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:40.288 19:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=661101 00:05:40.288 19:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 661101 /var/tmp/spdk.sock 00:05:40.288 19:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:40.288 19:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 661101 ']' 00:05:40.288 19:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.288 19:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.288 19:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.288 19:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.288 19:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.288 [2024-07-15 19:00:20.594392] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:40.288 [2024-07-15 19:00:20.594491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661101 ] 00:05:40.288 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.288 [2024-07-15 19:00:20.679029] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:40.288 [2024-07-15 19:00:20.679063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.547 [2024-07-15 19:00:20.768897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.111 19:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.111 19:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:41.111 19:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=661120 00:05:41.111 19:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 661120 /var/tmp/spdk2.sock 00:05:41.111 19:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.111 19:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 661120 ']' 00:05:41.111 19:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.111 19:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.111 19:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.111 19:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.111 19:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.111 [2024-07-15 19:00:21.448572] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:41.111 [2024-07-15 19:00:21.448668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661120 ] 00:05:41.111 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.371 [2024-07-15 19:00:21.542030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.371 [2024-07-15 19:00:21.706391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.023 19:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.023 19:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:42.023 19:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 661120 00:05:42.023 19:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 661120 00:05:42.023 19:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.956 lslocks: write error 00:05:42.956 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 661101 00:05:42.956 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 661101 ']' 00:05:42.956 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 661101 00:05:42.956 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:42.956 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.956 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 661101 00:05:42.956 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.956 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.956 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 661101' 00:05:42.956 killing process with pid 661101 00:05:42.956 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 661101 00:05:42.956 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 661101 00:05:43.522 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 661120 00:05:43.522 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 661120 ']' 00:05:43.522 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 661120 00:05:43.522 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:43.522 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.522 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 661120 00:05:43.522 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:43.522 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.522 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 661120' 00:05:43.522 killing process with pid 661120 00:05:43.522 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 661120 00:05:43.523 19:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 661120 00:05:43.780 00:05:43.780 real 0m3.591s 00:05:43.780 user 0m3.744s 00:05:43.780 sys 0m1.172s 00:05:43.780 19:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.780 19:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.780 ************************************ 00:05:43.780 END TEST locking_app_on_unlocked_coremask 00:05:43.780 ************************************ 00:05:43.780 19:00:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:43.780 19:00:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:43.780 19:00:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.780 19:00:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.780 19:00:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.037 ************************************ 00:05:44.037 START TEST locking_app_on_locked_coremask 00:05:44.037 ************************************ 00:05:44.037 19:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:44.037 19:00:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=661519 00:05:44.037 19:00:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 661519 /var/tmp/spdk.sock 00:05:44.037 19:00:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.037 19:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 661519 ']' 00:05:44.037 19:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.037 19:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.037 19:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.037 19:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.037 19:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.037 [2024-07-15 19:00:24.272393] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:44.037 [2024-07-15 19:00:24.272479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661519 ] 00:05:44.037 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.037 [2024-07-15 19:00:24.358753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.037 [2024-07-15 19:00:24.448671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=661701 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 661701 /var/tmp/spdk2.sock 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 661701 /var/tmp/spdk2.sock 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 661701 /var/tmp/spdk2.sock 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 661701 ']' 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.972 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.972 [2024-07-15 19:00:25.138156] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:44.972 [2024-07-15 19:00:25.138248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661701 ] 00:05:44.972 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.972 [2024-07-15 19:00:25.235369] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 661519 has claimed it. 00:05:44.972 [2024-07-15 19:00:25.235411] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:45.537 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (661701) - No such process 00:05:45.537 ERROR: process (pid: 661701) is no longer running 00:05:45.537 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.537 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:45.537 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:45.537 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.538 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:45.538 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.538 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 661519 00:05:45.538 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 661519 00:05:45.538 19:00:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.103 lslocks: write error 00:05:46.103 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 661519 00:05:46.103 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 661519 ']' 00:05:46.103 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 661519 00:05:46.103 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:46.103 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.103 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 661519 00:05:46.103 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.103 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.103 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 661519' 00:05:46.103 killing process with pid 661519 00:05:46.103 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 661519 00:05:46.103 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 661519 00:05:46.362 00:05:46.362 real 0m2.515s 00:05:46.362 user 0m2.685s 00:05:46.362 sys 0m0.799s 00:05:46.362 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.362 19:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.362 ************************************ 00:05:46.362 END TEST locking_app_on_locked_coremask 00:05:46.362 ************************************ 00:05:46.626 19:00:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:46.626 19:00:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:46.626 19:00:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.626 19:00:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.626 19:00:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 ************************************ 00:05:46.627 START TEST locking_overlapped_coremask 00:05:46.627 ************************************ 00:05:46.627 19:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:46.627 19:00:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=661917 00:05:46.627 19:00:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 661917 /var/tmp/spdk.sock 00:05:46.627 19:00:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:46.627 19:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 661917 ']' 00:05:46.627 19:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.627 19:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.627 19:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.627 19:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.627 19:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.627 [2024-07-15 19:00:26.875607] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
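Before a target is killed, the suite first proves the core lock is really held: locks_exist pipes lslocks -p <pid> into grep -q spdk_cpu_lock, matching the /var/tmp/spdk_cpu_lock_NNN files the target locks at startup. The recurring "lslocks: write error" is almost certainly benign: grep -q exits at the first match and closes the pipe, so lslocks gets a write error on the rest of its report. A minimal sketch of the check:

    locks_exist() {
        # true iff <pid> holds a lock on a file whose name contains spdk_cpu_lock
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }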
00:05:46.627 [2024-07-15 19:00:26.875677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661917 ] 00:05:46.627 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.627 [2024-07-15 19:00:26.958512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.627 [2024-07-15 19:00:27.039788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.627 [2024-07-15 19:00:27.039827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.628 [2024-07-15 19:00:27.039827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=662101 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 662101 /var/tmp/spdk2.sock 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 662101 /var/tmp/spdk2.sock 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 662101 /var/tmp/spdk2.sock 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 662101 ']' 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.570 19:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.570 [2024-07-15 19:00:27.734636] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
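The two core masks in this test are chosen to collide on exactly one core, which is what the claim error a few lines below exercises:

    0x07 = 0b00111 -> cores 0,1,2   (first spdk_tgt)
    0x1c = 0b11100 -> cores 2,3,4   (second spdk_tgt)
    0x07 & 0x1c = 0x04 -> core 2 is claimed by both, so the second start must fail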
00:05:47.570 [2024-07-15 19:00:27.734727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662101 ] 00:05:47.570 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.570 [2024-07-15 19:00:27.830451] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 661917 has claimed it. 00:05:47.570 [2024-07-15 19:00:27.830491] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.136 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (662101) - No such process 00:05:48.136 ERROR: process (pid: 662101) is no longer running 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 661917 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 661917 ']' 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 661917 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 661917 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 661917' 00:05:48.136 killing process with pid 661917 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 661917 00:05:48.136 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 661917 00:05:48.395 00:05:48.395 real 0m1.917s 00:05:48.395 user 0m5.344s 00:05:48.395 sys 0m0.465s 00:05:48.395 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.395 19:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.395 ************************************ 00:05:48.395 END TEST locking_overlapped_coremask 00:05:48.395 ************************************ 00:05:48.395 19:00:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:48.395 19:00:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:48.395 19:00:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.395 19:00:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.395 19:00:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.654 ************************************ 00:05:48.655 START TEST locking_overlapped_coremask_via_rpc 00:05:48.655 ************************************ 00:05:48.655 19:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:48.655 19:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=662309 00:05:48.655 19:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 662309 /var/tmp/spdk.sock 00:05:48.655 19:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:48.655 19:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 662309 ']' 00:05:48.655 19:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.655 19:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.655 19:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.655 19:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.655 19:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.655 [2024-07-15 19:00:28.879513] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:48.655 [2024-07-15 19:00:28.879584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662309 ] 00:05:48.655 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.655 [2024-07-15 19:00:28.964929] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:48.655 [2024-07-15 19:00:28.964960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.655 [2024-07-15 19:00:29.053308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.655 [2024-07-15 19:00:29.053409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.655 [2024-07-15 19:00:29.053409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.606 19:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.606 19:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:49.606 19:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:49.606 19:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=662332 00:05:49.606 19:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 662332 /var/tmp/spdk2.sock 00:05:49.606 19:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 662332 ']' 00:05:49.606 19:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.606 19:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.606 19:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.606 19:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.606 19:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.606 [2024-07-15 19:00:29.735447] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:49.606 [2024-07-15 19:00:29.735523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662332 ] 00:05:49.606 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.606 [2024-07-15 19:00:29.833151] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:49.606 [2024-07-15 19:00:29.833181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.606 [2024-07-15 19:00:29.995102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.606 [2024-07-15 19:00:29.998265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.606 [2024-07-15 19:00:29.998266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.174 [2024-07-15 19:00:30.583281] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 662309 has claimed it. 
00:05:50.174 request: 00:05:50.174 { 00:05:50.174 "method": "framework_enable_cpumask_locks", 00:05:50.174 "req_id": 1 00:05:50.174 } 00:05:50.174 Got JSON-RPC error response 00:05:50.174 response: 00:05:50.174 { 00:05:50.174 "code": -32603, 00:05:50.174 "message": "Failed to claim CPU core: 2" 00:05:50.174 } 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 662309 /var/tmp/spdk.sock 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 662309 ']' 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.174 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.432 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.432 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:50.432 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 662332 /var/tmp/spdk2.sock 00:05:50.432 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 662332 ']' 00:05:50.432 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.432 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.432 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
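This failure is the point of the via_rpc variant: both targets came up on overlapping masks (0x7 and 0x1c) because --disable-cpumask-locks skips lock acquisition at startup, and framework_enable_cpumask_locks then tries to claim the locks at runtime. The first target wins cores 0-2; the second trips over core 2 and gets the -32603 error shown above. rpc_cmd is the harness wrapper around scripts/rpc.py, so outside the harness the same exchange would look roughly like:

    scripts/rpc.py framework_enable_cpumask_locks                        # first target claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # => {"code": -32603, "message": "Failed to claim CPU core: 2"}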
00:05:50.432 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.432 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.690 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.690 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:50.690 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:50.690 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.691 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.691 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.691 00:05:50.691 real 0m2.121s 00:05:50.691 user 0m0.836s 00:05:50.691 sys 0m0.216s 00:05:50.691 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.691 19:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.691 ************************************ 00:05:50.691 END TEST locking_overlapped_coremask_via_rpc 00:05:50.691 ************************************ 00:05:50.691 19:00:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:50.691 19:00:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:50.691 19:00:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 662309 ]] 00:05:50.691 19:00:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 662309 00:05:50.691 19:00:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 662309 ']' 00:05:50.691 19:00:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 662309 00:05:50.691 19:00:31 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:50.691 19:00:31 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.691 19:00:31 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 662309 00:05:50.691 19:00:31 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.691 19:00:31 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.691 19:00:31 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 662309' 00:05:50.691 killing process with pid 662309 00:05:50.691 19:00:31 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 662309 00:05:50.691 19:00:31 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 662309 00:05:51.258 19:00:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 662332 ]] 00:05:51.258 19:00:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 662332 00:05:51.258 19:00:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 662332 ']' 00:05:51.258 19:00:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 662332 00:05:51.258 19:00:31 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:05:51.258 19:00:31 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.258 19:00:31 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 662332 00:05:51.258 19:00:31 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:51.258 19:00:31 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:51.258 19:00:31 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 662332' 00:05:51.258 killing process with pid 662332 00:05:51.258 19:00:31 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 662332 00:05:51.258 19:00:31 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 662332 00:05:51.516 19:00:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:51.516 19:00:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:51.516 19:00:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 662309 ]] 00:05:51.516 19:00:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 662309 00:05:51.516 19:00:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 662309 ']' 00:05:51.516 19:00:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 662309 00:05:51.516 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (662309) - No such process 00:05:51.516 19:00:31 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 662309 is not found' 00:05:51.516 Process with pid 662309 is not found 00:05:51.516 19:00:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 662332 ]] 00:05:51.516 19:00:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 662332 00:05:51.516 19:00:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 662332 ']' 00:05:51.516 19:00:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 662332 00:05:51.516 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (662332) - No such process 00:05:51.516 19:00:31 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 662332 is not found' 00:05:51.516 Process with pid 662332 is not found 00:05:51.516 19:00:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:51.516 00:05:51.516 real 0m19.178s 00:05:51.516 user 0m31.284s 00:05:51.516 sys 0m6.356s 00:05:51.516 19:00:31 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.516 19:00:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.516 ************************************ 00:05:51.516 END TEST cpu_locks 00:05:51.516 ************************************ 00:05:51.516 19:00:31 event -- common/autotest_common.sh@1142 -- # return 0 00:05:51.516 00:05:51.516 real 0m45.501s 00:05:51.516 user 1m24.077s 00:05:51.516 sys 0m10.905s 00:05:51.516 19:00:31 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.516 19:00:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.516 ************************************ 00:05:51.516 END TEST event 00:05:51.516 ************************************ 00:05:51.516 19:00:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.516 19:00:31 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:51.516 19:00:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.516 19:00:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.516 19:00:31 -- 
common/autotest_common.sh@10 -- # set +x 00:05:51.776 ************************************ 00:05:51.776 START TEST thread 00:05:51.776 ************************************ 00:05:51.776 19:00:31 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:51.776 * Looking for test storage... 00:05:51.776 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:51.776 19:00:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:51.776 19:00:32 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:51.776 19:00:32 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.776 19:00:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.776 ************************************ 00:05:51.776 START TEST thread_poller_perf 00:05:51.776 ************************************ 00:05:51.776 19:00:32 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:51.776 [2024-07-15 19:00:32.118756] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:51.776 [2024-07-15 19:00:32.118865] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662785 ] 00:05:51.776 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.034 [2024-07-15 19:00:32.206948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.034 [2024-07-15 19:00:32.287663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.034 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:52.965 ====================================== 00:05:52.965 busy:2303184960 (cyc) 00:05:52.965 total_run_count: 857000 00:05:52.965 tsc_hz: 2300000000 (cyc) 00:05:52.965 ====================================== 00:05:52.965 poller_cost: 2687 (cyc), 1168 (nsec) 00:05:52.965 00:05:52.965 real 0m1.260s 00:05:52.965 user 0m1.150s 00:05:52.965 sys 0m0.106s 00:05:52.965 19:00:33 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.965 19:00:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.965 ************************************ 00:05:52.965 END TEST thread_poller_perf 00:05:52.965 ************************************ 00:05:53.224 19:00:33 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:53.224 19:00:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:53.224 19:00:33 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:53.224 19:00:33 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.224 19:00:33 thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.224 ************************************ 00:05:53.224 START TEST thread_poller_perf 00:05:53.224 ************************************ 00:05:53.224 19:00:33 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:53.224 [2024-07-15 19:00:33.462732] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:53.224 [2024-07-15 19:00:33.462814] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662986 ] 00:05:53.224 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.224 [2024-07-15 19:00:33.552878] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.224 [2024-07-15 19:00:33.635938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.224 Running 1000 pollers for 1 seconds with 0 microseconds period. 
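The poller_perf summary reduces to two divisions (the flags map as -b pollers, -l period in microseconds, -t seconds). For the 1 microsecond run above:

    poller_cost (cyc)  = busy / total_run_count = 2303184960 / 857000 ≈ 2687
    poller_cost (nsec) = cyc / (tsc_hz / 1e9)   = 2687 / 2.3          ≈ 1168

The 0 microsecond run that follows is scored the same way; with no sleep between iterations the run count climbs and the per-poll cost drops accordingly.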
00:05:54.599 ====================================== 00:05:54.599 busy:2301444410 (cyc) 00:05:54.599 total_run_count: 14175000 00:05:54.599 tsc_hz: 2300000000 (cyc) 00:05:54.599 ====================================== 00:05:54.599 poller_cost: 162 (cyc), 70 (nsec) 00:05:54.599 00:05:54.599 real 0m1.264s 00:05:54.599 user 0m1.151s 00:05:54.599 sys 0m0.108s 00:05:54.599 19:00:34 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.599 19:00:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.599 ************************************ 00:05:54.599 END TEST thread_poller_perf 00:05:54.599 ************************************ 00:05:54.599 19:00:34 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:54.599 19:00:34 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:54.599 19:00:34 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:54.599 19:00:34 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.599 19:00:34 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.599 19:00:34 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.599 ************************************ 00:05:54.599 START TEST thread_spdk_lock 00:05:54.599 ************************************ 00:05:54.599 19:00:34 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:54.599 [2024-07-15 19:00:34.807317] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:54.599 [2024-07-15 19:00:34.807397] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663193 ] 00:05:54.599 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.599 [2024-07-15 19:00:34.895062] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.599 [2024-07-15 19:00:34.977928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.599 [2024-07-15 19:00:34.977930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.167 [2024-07-15 19:00:35.472691] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:55.167 [2024-07-15 19:00:35.472727] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:55.167 [2024-07-15 19:00:35.472737] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14cdec0 00:05:55.167 [2024-07-15 19:00:35.473589] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:55.167 [2024-07-15 19:00:35.473692] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:55.167 [2024-07-15 19:00:35.473711] 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:55.167 Starting test contend 00:05:55.167 Worker Delay Wait us Hold us Total us 00:05:55.167 0 3 180272 188849 369122 00:05:55.167 1 5 97112 289174 386287 00:05:55.167 PASS test contend 00:05:55.167 Starting test hold_by_poller 00:05:55.167 PASS test hold_by_poller 00:05:55.167 Starting test hold_by_message 00:05:55.167 PASS test hold_by_message 00:05:55.167 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:55.167 100014 assertions passed 00:05:55.167 0 assertions failed 00:05:55.167 00:05:55.167 real 0m0.750s 00:05:55.167 user 0m1.132s 00:05:55.167 sys 0m0.109s 00:05:55.167 19:00:35 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.167 19:00:35 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:55.167 ************************************ 00:05:55.167 END TEST thread_spdk_lock 00:05:55.167 ************************************ 00:05:55.167 19:00:35 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:55.167 00:05:55.167 real 0m3.630s 00:05:55.167 user 0m3.557s 00:05:55.167 sys 0m0.585s 00:05:55.167 19:00:35 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.167 19:00:35 thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.167 ************************************ 00:05:55.167 END TEST thread 00:05:55.167 ************************************ 00:05:55.426 19:00:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:55.426 19:00:35 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:55.426 19:00:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.426 19:00:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.426 19:00:35 -- common/autotest_common.sh@10 -- # set +x 00:05:55.426 ************************************ 00:05:55.426 START TEST accel 00:05:55.426 ************************************ 00:05:55.426 19:00:35 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:55.426 * Looking for test storage... 00:05:55.426 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:05:55.426 19:00:35 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:55.426 19:00:35 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:55.426 19:00:35 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.426 19:00:35 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=663433 00:05:55.426 19:00:35 accel -- accel/accel.sh@63 -- # waitforlisten 663433 00:05:55.426 19:00:35 accel -- common/autotest_common.sh@829 -- # '[' -z 663433 ']' 00:05:55.426 19:00:35 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.426 19:00:35 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:55.426 19:00:35 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.426 19:00:35 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:55.426 19:00:35 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:55.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.426 19:00:35 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.426 19:00:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.426 19:00:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.426 19:00:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.426 19:00:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.426 19:00:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.426 19:00:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.426 19:00:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:55.426 19:00:35 accel -- accel/accel.sh@41 -- # jq -r . 00:05:55.426 [2024-07-15 19:00:35.807252] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:55.426 [2024-07-15 19:00:35.807327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663433 ] 00:05:55.426 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.684 [2024-07-15 19:00:35.878674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.684 [2024-07-15 19:00:35.959685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.250 19:00:36 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.250 19:00:36 accel -- common/autotest_common.sh@862 -- # return 0 00:05:56.250 19:00:36 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:56.250 19:00:36 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:56.250 19:00:36 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:56.250 19:00:36 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:56.250 19:00:36 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:56.250 19:00:36 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:56.250 19:00:36 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:56.250 19:00:36 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.250 19:00:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.250 19:00:36 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.250 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.250 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.250 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.250 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 
00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # IFS== 00:05:56.509 19:00:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:56.509 19:00:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:56.509 19:00:36 accel -- accel/accel.sh@75 -- # killprocess 663433 00:05:56.509 19:00:36 accel -- common/autotest_common.sh@948 -- # '[' -z 663433 ']' 00:05:56.509 19:00:36 accel -- common/autotest_common.sh@952 -- # kill -0 663433 00:05:56.509 19:00:36 accel -- common/autotest_common.sh@953 -- # uname 00:05:56.509 19:00:36 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.509 19:00:36 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 663433 00:05:56.509 19:00:36 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.509 19:00:36 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.509 19:00:36 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 663433' 00:05:56.509 killing process with pid 663433 00:05:56.509 19:00:36 accel -- common/autotest_common.sh@967 -- # kill 663433 00:05:56.509 19:00:36 accel -- common/autotest_common.sh@972 -- # wait 663433 00:05:56.767 19:00:37 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:56.768 19:00:37 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:56.768 19:00:37 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:56.768 19:00:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.768 19:00:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.768 19:00:37 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:56.768 19:00:37 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:56.768 19:00:37 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:56.768 19:00:37 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.768 19:00:37 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.768 19:00:37 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.768 19:00:37 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.768 19:00:37 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.768 19:00:37 accel.accel_help -- accel/accel.sh@40 -- # local 
IFS=, 00:05:56.768 19:00:37 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:56.768 19:00:37 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.768 19:00:37 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:56.768 19:00:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.768 19:00:37 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:56.768 19:00:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:56.768 19:00:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.768 19:00:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.027 ************************************ 00:05:57.027 START TEST accel_missing_filename 00:05:57.027 ************************************ 00:05:57.027 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:57.027 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:57.027 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:57.027 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:57.027 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.027 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:57.027 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.027 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:57.027 19:00:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:57.027 19:00:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:57.027 19:00:37 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.027 19:00:37 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.027 19:00:37 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.027 19:00:37 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.027 19:00:37 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.027 19:00:37 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:57.027 19:00:37 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:57.027 [2024-07-15 19:00:37.232379] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
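The long run of IFS== / read -r opc module fragments earlier in this test is a single loop unrolled by xtrace: the harness asks the target for its opcode-to-module table over RPC and records that every opcode on this rig is serviced in software (no hardware accel module is loaded). Reassembled, the trace corresponds roughly to the following, with $rpc_py pointing at scripts/rpc.py:

    declare -A expected_opcs
    exp_opcs=($($rpc_py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    # jq emits one "opcode=module" line per table entry, e.g. copy=software
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"
        expected_opcs["$opc"]=$module
    done

The example opcode name is illustrative; the trace shows the assignments being stored but not the key strings themselves.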
00:05:57.027 [2024-07-15 19:00:37.232480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663649 ] 00:05:57.027 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.027 [2024-07-15 19:00:37.320090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.027 [2024-07-15 19:00:37.405280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.027 [2024-07-15 19:00:37.451703] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.285 [2024-07-15 19:00:37.521214] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:57.285 A filename is required. 00:05:57.285 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:57.285 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.285 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:57.285 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:57.285 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:57.285 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.285 00:05:57.285 real 0m0.387s 00:05:57.285 user 0m0.256s 00:05:57.285 sys 0m0.167s 00:05:57.285 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.285 19:00:37 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:57.285 ************************************ 00:05:57.285 END TEST accel_missing_filename 00:05:57.285 ************************************ 00:05:57.285 19:00:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.285 19:00:37 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:57.285 19:00:37 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:57.285 19:00:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.285 19:00:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.285 ************************************ 00:05:57.285 START TEST accel_compress_verify 00:05:57.285 ************************************ 00:05:57.285 19:00:37 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:57.285 19:00:37 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:57.285 19:00:37 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:57.285 19:00:37 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:57.285 19:00:37 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.285 19:00:37 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:57.285 19:00:37 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.285 19:00:37 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 
-w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:57.285 19:00:37 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:57.285 19:00:37 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:57.285 19:00:37 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.285 19:00:37 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.285 19:00:37 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.285 19:00:37 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.285 19:00:37 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.285 19:00:37 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:57.285 19:00:37 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:57.285 [2024-07-15 19:00:37.699933] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:57.285 [2024-07-15 19:00:37.700017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663670 ] 00:05:57.545 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.545 [2024-07-15 19:00:37.788595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.545 [2024-07-15 19:00:37.872723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.545 [2024-07-15 19:00:37.915483] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.803 [2024-07-15 19:00:37.975515] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:57.803 00:05:57.803 Compression does not support the verify option, aborting. 
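The NOT wrapper driving these negative tests inverts the exit status: the test passes only when accel_perf fails as expected, and the es bookkeeping that follows folds large statuses back into small codes before the final check, since shells report "killed by signal N" as 128+N. A minimal sketch of that pattern in plain Bash; the helper name NOT matches the harness, but the body is an illustrative reconstruction, not the exact autotest_common.sh source:

NOT() {
    local es=0
    "$@" || es=$?                          # run the command, capture its exit status
    (( es > 128 )) && es=$(( es - 128 ))   # 128+N means "killed by signal N"
    (( es != 0 ))                          # invert: a non-zero status is a pass here
}

NOT accel_perf -t 1 -w compress            # passes: compress with -y but no valid input must fail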
00:05:57.803 19:00:38 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:57.803 19:00:38 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.803 19:00:38 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:57.803 19:00:38 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:57.803 19:00:38 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:57.803 19:00:38 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.803 00:05:57.803 real 0m0.369s 00:05:57.803 user 0m0.243s 00:05:57.803 sys 0m0.166s 00:05:57.803 19:00:38 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.803 19:00:38 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:57.803 ************************************ 00:05:57.803 END TEST accel_compress_verify 00:05:57.803 ************************************ 00:05:57.803 19:00:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.803 19:00:38 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:57.803 19:00:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:57.803 19:00:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.803 19:00:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.803 ************************************ 00:05:57.803 START TEST accel_wrong_workload 00:05:57.803 ************************************ 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:57.803 19:00:38 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:57.803 19:00:38 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:57.803 19:00:38 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.803 19:00:38 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.803 19:00:38 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.803 19:00:38 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.803 19:00:38 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.803 19:00:38 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:57.803 19:00:38 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:05:57.803 Unsupported workload type: foobar 00:05:57.803 [2024-07-15 19:00:38.145322] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:57.803 accel_perf options: 00:05:57.803 [-h help message] 00:05:57.803 [-q queue depth per core] 00:05:57.803 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:57.803 [-T number of threads per core 00:05:57.803 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:57.803 [-t time in seconds] 00:05:57.803 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:57.803 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:57.803 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:57.803 [-l for compress/decompress workloads, name of uncompressed input file 00:05:57.803 [-S for crc32c workload, use this seed value (default 0) 00:05:57.803 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:57.803 [-f for fill workload, use this BYTE value (default 255) 00:05:57.803 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:57.803 [-y verify result if this switch is on] 00:05:57.803 [-a tasks to allocate per core (default: same value as -q)] 00:05:57.803 Can be used to spread operations across a wider range of memory. 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.803 00:05:57.803 real 0m0.030s 00:05:57.803 user 0m0.017s 00:05:57.803 sys 0m0.013s 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.803 19:00:38 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:57.803 ************************************ 00:05:57.803 END TEST accel_wrong_workload 00:05:57.803 ************************************ 00:05:57.803 Error: writing output failed: Broken pipe 00:05:57.803 19:00:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.803 19:00:38 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:57.803 19:00:38 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:57.803 19:00:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.803 19:00:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.061 ************************************ 00:05:58.061 START TEST accel_negative_buffers 00:05:58.061 ************************************ 00:05:58.061 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:58.061 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:58.061 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:58.061 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:58.061 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:05:58.061 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:58.061 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.061 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:58.061 19:00:38 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:58.061 19:00:38 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:58.061 19:00:38 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.061 19:00:38 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.061 19:00:38 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.061 19:00:38 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.061 19:00:38 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.061 19:00:38 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:58.061 19:00:38 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:58.061 -x option must be non-negative. 00:05:58.061 [2024-07-15 19:00:38.262200] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:58.061 accel_perf options: 00:05:58.061 [-h help message] 00:05:58.061 [-q queue depth per core] 00:05:58.062 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:58.062 [-T number of threads per core 00:05:58.062 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:58.062 [-t time in seconds] 00:05:58.062 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:58.062 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:58.062 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:58.062 [-l for compress/decompress workloads, name of uncompressed input file 00:05:58.062 [-S for crc32c workload, use this seed value (default 0) 00:05:58.062 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:58.062 [-f for fill workload, use this BYTE value (default 255) 00:05:58.062 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:58.062 [-y verify result if this switch is on] 00:05:58.062 [-a tasks to allocate per core (default: same value as -q)] 00:05:58.062 Can be used to spread operations across a wider range of memory. 
00:05:58.062 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:58.062 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.062 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.062 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.062 00:05:58.062 real 0m0.031s 00:05:58.062 user 0m0.014s 00:05:58.062 sys 0m0.016s 00:05:58.062 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.062 19:00:38 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:58.062 ************************************ 00:05:58.062 END TEST accel_negative_buffers 00:05:58.062 ************************************ 00:05:58.062 Error: writing output failed: Broken pipe 00:05:58.062 19:00:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.062 19:00:38 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:58.062 19:00:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:58.062 19:00:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.062 19:00:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.062 ************************************ 00:05:58.062 START TEST accel_crc32c 00:05:58.062 ************************************ 00:05:58.062 19:00:38 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:58.062 19:00:38 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:58.062 [2024-07-15 19:00:38.376069] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:58.062 [2024-07-15 19:00:38.376158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663852 ] 00:05:58.062 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.062 [2024-07-15 19:00:38.463760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.320 [2024-07-15 19:00:38.560291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 
00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.320 19:00:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.693 19:00:39 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.693 19:00:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.694 19:00:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.694 19:00:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:39 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.694 19:00:39 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:59.694 19:00:39 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.694 00:05:59.694 real 0m1.405s 00:05:59.694 user 0m1.256s 00:05:59.694 sys 0m0.162s 00:05:59.694 19:00:39 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.694 19:00:39 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:59.694 ************************************ 00:05:59.694 END TEST accel_crc32c 00:05:59.694 ************************************ 00:05:59.694 19:00:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.694 19:00:39 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:59.694 19:00:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:59.694 19:00:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.694 19:00:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.694 ************************************ 00:05:59.694 START TEST accel_crc32c_C2 00:05:59.694 ************************************ 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:59.694 19:00:39 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
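Each test's build_accel_config step, visible again just above, assembles a JSON config in the accel_json_cfg array, joins it with IFS=, and pipes it through jq -r . before accel_perf is launched with -c /dev/fd/62, an anonymous descriptor created by process substitution. A hedged sketch of that plumbing, with names taken from the trace but the body reconstructed for illustration:

build_accel_config() {
    accel_json_cfg=()
    # Module-specific fragments would be appended here when the
    # corresponding SPDK_TEST_* switches are set, e.g.:
    # accel_json_cfg+=('"method": "dsa_scan_accel_module"')
    local IFS=,
    jq -r . <<< "{ ${accel_json_cfg[*]} }"   # comma-join the fragments, let jq validate
}

# The harness hands the generated JSON to accel_perf over a /dev/fd
# descriptor, which is what -c /dev/fd/62 in the trace corresponds to:
accel_perf -c <(build_accel_config) -t 1 -w crc32c -y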
00:05:59.694 [2024-07-15 19:00:39.861514] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:59.694 [2024-07-15 19:00:39.861597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664101 ] 00:05:59.694 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.694 [2024-07-15 19:00:39.947380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.694 [2024-07-15 19:00:40.035969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 
19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.694 19:00:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.069 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.069 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.069 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.069 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.069 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.069 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.069 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.069 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.069 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 
-- # val= 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.070 00:06:01.070 real 0m1.385s 00:06:01.070 user 0m1.243s 00:06:01.070 sys 0m0.148s 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.070 19:00:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:01.070 ************************************ 00:06:01.070 END TEST accel_crc32c_C2 00:06:01.070 ************************************ 00:06:01.070 19:00:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.070 19:00:41 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:01.070 19:00:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:01.070 19:00:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.070 19:00:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.070 ************************************ 00:06:01.070 START TEST accel_copy 00:06:01.070 ************************************ 00:06:01.070 19:00:41 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 
0 ]] 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:01.070 19:00:41 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:01.070 [2024-07-15 19:00:41.327460] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:01.070 [2024-07-15 19:00:41.327549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664313 ] 00:06:01.070 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.070 [2024-07-15 19:00:41.410965] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.070 [2024-07-15 19:00:41.491554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:01.329 19:00:41 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.329 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.330 19:00:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:02.267 19:00:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.267 00:06:02.267 real 0m1.375s 00:06:02.267 user 0m1.244s 00:06:02.267 sys 0m0.145s 00:06:02.267 19:00:42 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.267 19:00:42 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:02.267 ************************************ 00:06:02.267 END TEST accel_copy 00:06:02.267 ************************************ 00:06:02.525 19:00:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.525 19:00:42 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.525 19:00:42 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:02.525 19:00:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.525 19:00:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.525 ************************************ 00:06:02.525 START TEST accel_fill 00:06:02.525 ************************************ 00:06:02.525 19:00:42 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.525 19:00:42 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:02.525 19:00:42 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:02.525 19:00:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.525 19:00:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.525 19:00:42 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.525 19:00:42 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.525 19:00:42 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:02.525 19:00:42 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.525 19:00:42 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.526 19:00:42 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.526 19:00:42 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.526 19:00:42 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.526 19:00:42 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:02.526 19:00:42 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:02.526 [2024-07-15 19:00:42.783819] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:06:02.526 [2024-07-15 19:00:42.783900] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664515 ] 00:06:02.526 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.526 [2024-07-15 19:00:42.871752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.526 [2024-07-15 19:00:42.953885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.785 19:00:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:02.785 19:00:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 19:00:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.721 19:00:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.721 19:00:44 accel.accel_fill -- 
00:06:03.722 19:00:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:03.722 19:00:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:06:03.722 19:00:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:03.722
00:06:03.722 real 0m1.390s
00:06:03.722 user 0m1.246s
00:06:03.722 sys 0m0.156s
00:06:03.722 ************************************
00:06:03.722 END TEST accel_fill
00:06:03.722 ************************************
00:06:03.981 19:00:44 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:03.981 19:00:44 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:06:03.981 ************************************
00:06:03.981 START TEST accel_copy_crc32c
00:06:03.981 ************************************
19:00:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
[2024-07-15 19:00:44.250800] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
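While accel_perf initializes for this copy_crc32c pass, a sketch of the operation itself: the engine copies a source buffer to a destination and computes the CRC-32C (Castagnoli) checksum of the data, seeded here with 0 (the val=0 recorded below). A bitwise CRC is used for clarity; SPDK's software path typically uses table-driven or instruction-accelerated code:

    /* Illustrative copy_crc32c semantics: memcpy plus CRC-32C over the copied
     * bytes, seed 0. Bitwise CRC for clarity; NOT SPDK's implementation. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32c_update(uint32_t crc, const uint8_t *buf, size_t len)
    {
        crc = ~crc;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u))); /* reflected CRC-32C poly */
        }
        return ~crc;
    }

    int main(void)
    {
        static uint8_t src[4096], dst[4096];   /* matches val='4096 bytes' */
        for (size_t i = 0; i < sizeof(src); i++) src[i] = (uint8_t)i;

        memcpy(dst, src, sizeof(src));                     /* the copy half */
        uint32_t crc = crc32c_update(0, dst, sizeof(dst)); /* the crc32c half, seed 0 */

        printf("crc32c=0x%08x, copy %s\n", crc,
               memcmp(src, dst, sizeof(src)) == 0 ? "ok" : "mismatch");
        return 0;
    }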
00:06:03.981 [2024-07-15 19:00:44.250888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664719 ]
00:06:03.981 EAL: No free 2048 kB hugepages reported on node 1
00:06:03.981 [2024-07-15 19:00:44.334119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.240 [2024-07-15 19:00:44.417556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:06:04.241 19:00:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:06:05.618 19:00:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:05.618 19:00:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:05.618 19:00:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:05.618
00:06:05.618 real 0m1.385s
00:06:05.618 user 0m1.242s
00:06:05.618 sys 0m0.157s
00:06:05.618 ************************************
00:06:05.618 END TEST accel_copy_crc32c
00:06:05.618 ************************************
00:06:05.618 19:00:45 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:05.618 19:00:45 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:05.618 ************************************
00:06:05.618 START TEST accel_copy_crc32c_C2
00:06:05.618 ************************************
19:00:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
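The -C 2 flag just logged asks for a chained CRC: the checksum is accumulated across two 4096-byte sub-buffers (the '4096 bytes' / '8192 bytes' pair recorded below). The defining property is that the chained result must equal the one-shot CRC over the concatenated region; a self-checking sketch, again with an illustrative bitwise CRC rather than SPDK's code:

    /* Chained CRC-32C as exercised by -C 2: two 4096-byte updates, the first
     * result seeding the second, must match the one-shot CRC over 8192 bytes. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t crc32c_update(uint32_t crc, const uint8_t *buf, size_t len)
    {
        crc = ~crc;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    int main(void)
    {
        static uint8_t buf[8192];            /* matches val='8192 bytes' below */
        for (size_t i = 0; i < sizeof(buf); i++) buf[i] = (uint8_t)(i * 7);

        uint32_t one_shot = crc32c_update(0, buf, 8192);
        uint32_t chained  = crc32c_update(crc32c_update(0, buf, 4096), buf + 4096, 4096);

        assert(one_shot == chained);         /* chaining is associative here */
        printf("crc32c one-shot=0x%08x chained=0x%08x\n", one_shot, chained);
        return 0;
    }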
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
[2024-07-15 19:00:45.717279] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
00:06:05.618 [2024-07-15 19:00:45.717361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664920 ]
00:06:05.618 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.618 [2024-07-15 19:00:45.804870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.618 [2024-07-15 19:00:45.886815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:05.618 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:06:05.619 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:05.619 19:00:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:06:07.002 19:00:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:07.002 19:00:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:07.002 19:00:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:07.002
00:06:07.002 real 0m1.390s
00:06:07.002 user 0m1.246s
00:06:07.002 sys 0m0.158s
00:06:07.002 ************************************
00:06:07.002 END TEST accel_copy_crc32c_C2
00:06:07.002 ************************************
00:06:07.002 19:00:47 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:07.002 19:00:47 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:07.002 ************************************
00:06:07.002 START TEST accel_dualcast
00:06:07.002 ************************************
19:00:47 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
19:00:47 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
19:00:47 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
19:00:47 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
19:00:47 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
[2024-07-15 19:00:47.191689] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
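The dualcast workload starting here fans one source buffer out to two destinations in a single operation (useful, for example, for mirrored writes). A minimal sketch of the semantics, illustrative only and not SPDK's implementation:

    /* Dualcast semantics: one 4096-byte source written to two destinations,
     * then both copies verified against the source (the -y pass). */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        static uint8_t src[4096], dst1[4096], dst2[4096];
        for (size_t i = 0; i < sizeof(src); i++) src[i] = (uint8_t)(i ^ 0xA5);

        memcpy(dst1, src, sizeof(src));   /* same source fanned out ... */
        memcpy(dst2, src, sizeof(src));   /* ... to two destinations */

        assert(memcmp(dst1, src, sizeof(src)) == 0);
        assert(memcmp(dst2, src, sizeof(src)) == 0);
        puts("dualcast ok");
        return 0;
    }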
00:06:07.003 [2024-07-15 19:00:47.191776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665123 ]
00:06:07.003 EAL: No free 2048 kB hugepages reported on node 1
00:06:07.003 [2024-07-15 19:00:47.278931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.003 [2024-07-15 19:00:47.368556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.003 19:00:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:06:07.003 19:00:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:06:07.003 19:00:47 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:06:07.003 19:00:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:07.003 19:00:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:06:07.003 19:00:47 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:06:07.003 19:00:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:07.003 19:00:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:07.003 19:00:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:06:07.003 19:00:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:06:07.003 19:00:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:06:08.205 19:00:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:08.205 19:00:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:08.205 19:00:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:08.205
00:06:08.205 real 0m1.396s
00:06:08.205 user 0m1.242s
00:06:08.205 sys 0m0.167s
00:06:08.205 ************************************
00:06:08.205 END TEST accel_dualcast
00:06:08.205 ************************************
00:06:08.205 19:00:48 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:08.205 19:00:48 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:08.465 ************************************
00:06:08.465 START TEST accel_compare
00:06:08.465 ************************************
19:00:48 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
19:00:48 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
19:00:48 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
19:00:48 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
19:00:48 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
[2024-07-15 19:00:48.670853] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
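The compare workload launched above is a buffer-equality check, that is, memcmp semantics over two equal-sized buffers; with -y the tool expects a match. A short illustrative sketch, including the mismatch case the operation must also detect:

    /* Compare semantics: equal-sized buffers either match byte for byte or
     * the operation reports a mismatch. Illustrative only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        static uint8_t a[4096], b[4096];
        memset(a, 0x42, sizeof(a));
        memset(b, 0x42, sizeof(b));

        printf("equal buffers: %s\n", memcmp(a, b, sizeof(a)) == 0 ? "match" : "mismatch");

        b[100] ^= 1;  /* corrupt one byte: compare must now fail */
        printf("after flip:    %s\n", memcmp(a, b, sizeof(a)) == 0 ? "match" : "mismatch");
        return 0;
    }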
00:06:08.465 [2024-07-15 19:00:48.670929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665328 ]
00:06:08.465 EAL: No free 2048 kB hugepages reported on node 1
00:06:08.465 [2024-07-15 19:00:48.757839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:08.465 [2024-07-15 19:00:48.837446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.465 19:00:48 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:06:08.465 19:00:48 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:06:08.465 19:00:48 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:06:08.465 19:00:48 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:08.465 19:00:48 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:06:08.465 19:00:48 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:06:08.465 19:00:48 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:08.465 19:00:48 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:08.465 19:00:48 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:06:08.466 19:00:48 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:06:08.466 19:00:48 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:06:09.661 19:00:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:09.661 19:00:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:06:09.661 19:00:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:09.661
00:06:09.661 real 0m1.369s
00:06:09.661 user 0m1.221s
00:06:09.661 sys 0m0.160s
00:06:09.661 ************************************
00:06:09.661 END TEST accel_compare
00:06:09.661 ************************************
00:06:09.661 19:00:50 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:09.661 19:00:50 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:09.921 ************************************
00:06:09.921 START TEST accel_xor
00:06:09.921 ************************************
19:00:50 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
19:00:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
19:00:50 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
19:00:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
19:00:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
[2024-07-15 19:00:50.126534] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
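The xor workload launched above XORs n equally sized source buffers, byte for byte, into one destination (RAID-parity-style). This first pass records val=2 below (two sources), and the rerun that follows uses -x 3 for three, so one parameterized sketch covers both runs (illustrative; SPDK's software module would operate on wider words):

    /* N-source XOR semantics for both xor passes in this log: n=2 for the
     * default run, n=3 for the -x 3 rerun. Illustrative only. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void xor_buffers(uint8_t *dst, uint8_t **srcs, int n, size_t len)
    {
        memset(dst, 0, len);
        for (int s = 0; s < n; s++)
            for (size_t i = 0; i < len; i++)
                dst[i] ^= srcs[s][i];
    }

    int main(void)
    {
        enum { LEN = 4096 };                  /* matches val='4096 bytes' */
        static uint8_t a[LEN], b[LEN], c[LEN], out[LEN];
        for (size_t i = 0; i < LEN; i++) { a[i] = (uint8_t)i; b[i] = (uint8_t)(i >> 4); c[i] = 0x5A; }

        uint8_t *two[] = { a, b };
        xor_buffers(out, two, 2, LEN);        /* the default two-source pass (val=2) */
        assert(out[1] == (a[1] ^ b[1]));

        uint8_t *three[] = { a, b, c };
        xor_buffers(out, three, 3, LEN);      /* the -x 3 rerun's three-source case */
        assert(out[1] == (a[1] ^ b[1] ^ c[1]));

        puts("xor ok");
        return 0;
    }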
00:06:09.921 [2024-07-15 19:00:50.126622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665533 ]
00:06:09.921 EAL: No free 2048 kB hugepages reported on node 1
00:06:09.921 [2024-07-15 19:00:50.214092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.921 [2024-07-15 19:00:50.306408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.185 19:00:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:06:10.185 19:00:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:06:10.185 19:00:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:06:10.185 19:00:50 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:06:10.185 19:00:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:10.185 19:00:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:06:10.185 19:00:50 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:06:10.185 19:00:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:10.185 19:00:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:10.185 19:00:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:06:10.185 19:00:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:06:10.185 19:00:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:06:11.121 19:00:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:11.122 19:00:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:11.122 19:00:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:11.122
00:06:11.122 real 0m1.402s
00:06:11.122 user 0m1.257s
00:06:11.122 sys 0m0.158s
00:06:11.122 ************************************
00:06:11.122 END TEST accel_xor
00:06:11.122 ************************************
00:06:11.122 19:00:51 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:11.122 19:00:51 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:11.381 ************************************
00:06:11.381 START TEST accel_xor
00:06:11.381 ************************************
19:00:51 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
19:00:51 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
19:00:51 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
19:00:51 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
19:00:51 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
[2024-07-15 19:00:51.609010] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
00:06:11.381 [2024-07-15 19:00:51.609093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665732 ]
00:06:11.381 EAL: No free 2048 kB hugepages reported on node 1
00:06:11.381 [2024-07-15 19:00:51.696306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:11.381 [2024-07-15 19:00:51.778669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.640 19:00:51 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:06:11.640 19:00:51 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:06:11.640 19:00:51 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:06:11.640 19:00:51 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:06:11.640 19:00:51 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:11.640 19:00:51 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:06:11.640 19:00:51 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:06:11.640 19:00:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:11.640 19:00:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:11.640 19:00:51 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:06:11.641 19:00:51 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:06:11.641 19:00:51 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:06:12.584 19:00:52 accel.accel_xor -- accel/accel.sh@20 --
# val= 00:06:12.584 19:00:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.584 19:00:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.584 19:00:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.584 19:00:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:12.584 19:00:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.584 19:00:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.584 19:00:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.584 19:00:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.584 19:00:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:12.584 19:00:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.584 00:06:12.584 real 0m1.389s 00:06:12.584 user 0m1.255s 00:06:12.584 sys 0m0.147s 00:06:12.584 19:00:52 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.584 19:00:52 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:12.584 ************************************ 00:06:12.584 END TEST accel_xor 00:06:12.584 ************************************ 00:06:12.845 19:00:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.845 19:00:53 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:12.845 19:00:53 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:12.845 19:00:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.845 19:00:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.845 ************************************ 00:06:12.845 START TEST accel_dif_verify 00:06:12.845 ************************************ 00:06:12.845 19:00:53 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:12.845 19:00:53 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:12.845 [2024-07-15 19:00:53.084131] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
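The dif_verify case that starts here drives the same accel_perf binary with -w dif_verify; the val= assignments that follow appear to configure a 4096-byte transfer carved into 512-byte blocks with 8 bytes of DIF metadata each. A hand-run equivalent, under the same assumed checkout:

    # Sketch: run the DIF-verify workload directly for 1 second.
    cd ~/spdk && ./build/examples/accel_perf -t 1 -w dif_verify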
00:06:12.845 [2024-07-15 19:00:53.084215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665938 ] 00:06:12.845 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.845 [2024-07-15 19:00:53.169322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.845 [2024-07-15 19:00:53.253480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.103 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:13.103 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.103 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:13.104 19:00:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:14.040 19:00:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.040 00:06:14.040 real 0m1.388s 00:06:14.040 user 0m1.250s 00:06:14.040 sys 0m0.154s 00:06:14.040 19:00:54 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.040 19:00:54 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:14.040 ************************************ 00:06:14.040 END TEST accel_dif_verify 00:06:14.040 ************************************ 00:06:14.300 19:00:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.300 19:00:54 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:14.300 19:00:54 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:14.300 19:00:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.300 19:00:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.300 ************************************ 00:06:14.300 START TEST accel_dif_generate 00:06:14.300 ************************************ 00:06:14.300 19:00:54 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.300 
19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:14.300 19:00:54 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:14.300 [2024-07-15 19:00:54.556271] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:14.300 [2024-07-15 19:00:54.556357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666137 ] 00:06:14.300 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.300 [2024-07-15 19:00:54.643136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.560 [2024-07-15 19:00:54.739838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:14.560 19:00:54 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:14.560 19:00:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:15.938 19:00:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:15.939 19:00:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.939 19:00:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:15.939 19:00:55 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.939 00:06:15.939 real 0m1.404s 00:06:15.939 user 0m1.261s 00:06:15.939 sys 0m0.159s 00:06:15.939 19:00:55 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.939 19:00:55 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:15.939 ************************************ 00:06:15.939 END TEST accel_dif_generate 00:06:15.939 ************************************ 00:06:15.939 19:00:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.939 19:00:55 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:15.939 19:00:55 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:15.939 19:00:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.939 19:00:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.939 ************************************ 00:06:15.939 START TEST accel_dif_generate_copy 00:06:15.939 ************************************ 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:15.939 [2024-07-15 19:00:56.046661] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
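With accel_dif_generate closed out above (real 0m1.404s), the harness moves on to dif_generate_copy, which generates the DIF metadata and copies the payload in a single operation. Both variants can be exercised back to back, again assuming a built tree:

    # Sketch: the two DIF-generation variants; paths assumed.
    cd ~/spdk
    ./build/examples/accel_perf -t 1 -w dif_generate        # generate in place
    ./build/examples/accel_perf -t 1 -w dif_generate_copy   # generate and copy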
00:06:15.939 [2024-07-15 19:00:56.046749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666339 ]
00:06:15.939 EAL: No free 2048 kB hugepages reported on node 1
00:06:15.939 [2024-07-15 19:00:56.135430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.939 [2024-07-15 19:00:56.221254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
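Every case repeats this same preamble: the DPDK EAL parameter dump, the hugepage notice, app start, then the val= configuration walk. When combing a saved copy of this console output, the START/END banners and the bash time summaries are the fastest landmarks; a small helper, with a hypothetical log file name:

    # LOG is a placeholder; point it at the saved console output.
    LOG=autotest-console.log
    grep -nE '(START|END) TEST|real[[:space:]]+[0-9]+m[0-9.]+s' "$LOG"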
00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.939 19:00:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.318 00:06:17.318 real 0m1.394s 00:06:17.318 user 0m1.249s 00:06:17.318 sys 0m0.160s 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.318 19:00:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:17.318 ************************************ 00:06:17.318 END TEST accel_dif_generate_copy 00:06:17.318 ************************************ 00:06:17.318 19:00:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.318 19:00:57 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:17.318 19:00:57 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:17.318 19:00:57 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:17.318 19:00:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.318 19:00:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.318 ************************************ 00:06:17.318 START TEST accel_comp 00:06:17.318 ************************************ 00:06:17.318 19:00:57 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:17.318 19:00:57 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:17.318 [2024-07-15 19:00:57.520657] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:17.318 [2024-07-15 19:00:57.520739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666547 ] 00:06:17.318 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.318 [2024-07-15 19:00:57.607107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.318 [2024-07-15 19:00:57.689642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.318 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.319 19:00:57 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.319 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 19:00:57 accel.accel_comp -- 
accel/accel.sh@19 -- # read -r var val 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.576 19:00:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:18.560 19:00:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.560 00:06:18.560 real 0m1.391s 00:06:18.560 user 0m1.254s 00:06:18.560 sys 0m0.152s 00:06:18.560 19:00:58 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.560 19:00:58 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:18.560 ************************************ 00:06:18.560 END TEST accel_comp 00:06:18.560 ************************************ 00:06:18.560 19:00:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.560 19:00:58 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:18.560 19:00:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:18.560 19:00:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
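The compress pass just above finished in about 1.39 s of wall time, and run_test now queues the matching decompress case. For the compression workloads the harness feeds a real corpus, test/accel/bib from the SPDK tree, through -l. Reproducing the compress run under the same assumed checkout:

    # Sketch: compress the bundled bib corpus for 1 second; path assumed.
    cd ~/spdk && ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib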
00:06:18.560 19:00:58 accel -- common/autotest_common.sh@10 -- # set +x
00:06:18.560 ************************************
00:06:18.560 START TEST accel_decomp
00:06:18.560 ************************************
00:06:18.560 19:00:58 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=,
00:06:18.560 19:00:58 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
00:06:18.849 [2024-07-15 19:00:58.991038] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
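The decompress invocation shown in this banner mirrors the compress one but adds -y, so accel_perf verifies the inflated output against the original bib input. The hand-run equivalent, same assumed checkout:

    # Sketch: decompress and verify against the original input; path assumed.
    cd ~/spdk && ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y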
00:06:18.849 [2024-07-15 19:00:58.991117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666759 ] 00:06:18.849 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.849 [2024-07-15 19:00:59.068526] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.849 [2024-07-15 19:00:59.153160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.849 19:00:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.316 19:01:00 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:20.316 19:01:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.316 00:06:20.316 real 0m1.386s 00:06:20.316 user 0m1.243s 00:06:20.316 sys 0m0.159s 00:06:20.316 19:01:00 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.316 19:01:00 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:20.316 ************************************ 00:06:20.316 END TEST accel_decomp 00:06:20.316 ************************************ 00:06:20.316 19:01:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.316 19:01:00 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:20.316 19:01:00 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:20.316 19:01:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.316 19:01:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.316 ************************************ 00:06:20.316 START TEST accel_decomp_full 00:06:20.316 ************************************ 00:06:20.316 19:01:00 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:20.316 19:01:00 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:20.316 19:01:00 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:20.316 [2024-07-15 19:01:00.457742] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:20.316 [2024-07-15 19:01:00.457826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667007 ] 00:06:20.316 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.317 [2024-07-15 19:01:00.544768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.317 [2024-07-15 19:01:00.627461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- 
# case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.317 19:01:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:21.695 19:01:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.695 00:06:21.695 real 0m1.402s 00:06:21.695 user 0m1.257s 00:06:21.695 sys 0m0.161s 00:06:21.695 19:01:01 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.695 19:01:01 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:21.695 ************************************ 00:06:21.695 END TEST accel_decomp_full 00:06:21.695 ************************************ 00:06:21.695 19:01:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.695 19:01:01 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:21.695 19:01:01 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 
']' 00:06:21.695 19:01:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.695 19:01:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.695 ************************************ 00:06:21.695 START TEST accel_decomp_mcore 00:06:21.695 ************************************ 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:21.695 19:01:01 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:21.695 [2024-07-15 19:01:01.947633] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
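The -m 0xf mask on the run_test line above is handed to DPDK as the EAL core mask (-c 0xf in the parameters just below): binary 1111, i.e. cores 0-3, which is why exactly four "Reactor started on core N" notices follow. A quick bitmask check:

mask=0xf
for core in 0 1 2 3 4; do
    (( mask >> core & 1 )) && echo "core $core enabled"
done
# prints core 0 through core 3; core 4 stays silent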
00:06:21.695 [2024-07-15 19:01:01.947722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667254 ] 00:06:21.695 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.695 [2024-07-15 19:01:02.035714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.695 [2024-07-15 19:01:02.123974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.695 [2024-07-15 19:01:02.124059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.695 [2024-07-15 19:01:02.124158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.695 [2024-07-15 19:01:02.124159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.954 19:01:02 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # IFS=: 00:06:21.954 19:01:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:22.888 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.147 00:06:23.147 real 0m1.397s 00:06:23.147 user 0m4.598s 00:06:23.147 sys 0m0.171s 00:06:23.147 19:01:03 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.147 19:01:03 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:23.147 ************************************ 00:06:23.147 END TEST accel_decomp_mcore 00:06:23.147 ************************************ 00:06:23.147 19:01:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.147 19:01:03 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.147 19:01:03 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:23.147 19:01:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.147 19:01:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.147 ************************************ 00:06:23.147 START TEST accel_decomp_full_mcore 00:06:23.147 ************************************ 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:23.147 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:23.147 [2024-07-15 19:01:03.420255] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
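accel_decomp_full_mcore, starting here, combines the two variations exercised separately above: -o 0, which replaces the default 4096-byte transfer with the whole bib file (the val='111250 bytes' trace lines, versus val='4096 bytes' in the plain runs), and the -m 0xf four-core mask. Reading -o 0 as "use the full input file size" is an inference from those trace values; if it is right, the 111250 figure should simply be the size of the input file:

stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
# expected to print 111250 under that reading of -o 0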
00:06:23.147 [2024-07-15 19:01:03.420325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667530 ] 00:06:23.147 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.147 [2024-07-15 19:01:03.508083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.407 [2024-07-15 19:01:03.594441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.407 [2024-07-15 19:01:03.594543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.407 [2024-07-15 19:01:03.594641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.407 [2024-07-15 19:01:03.594642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 19:01:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.785 00:06:24.785 real 0m1.408s 00:06:24.785 user 0m4.649s 00:06:24.785 sys 0m0.160s 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.785 19:01:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:24.785 ************************************ 00:06:24.785 END TEST accel_decomp_full_mcore 00:06:24.785 ************************************ 00:06:24.785 19:01:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.785 19:01:04 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:24.785 19:01:04 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:24.785 19:01:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.785 19:01:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.785 ************************************ 00:06:24.785 START TEST accel_decomp_mthread 00:06:24.785 ************************************ 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:24.785 19:01:04 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:24.785 [2024-07-15 19:01:04.914205] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
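The mthread variant keeps the same decompress workload and adds -T 2; the parsed thread count surfaces as a bare val=2 line in the trace below, in the slot that reads val=1 in the single-threaded runs above. Same hedged flag reading as the earlier sketch:

SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2
# -T 2 assumed to mean "use 2 worker threads", inferred from the test name
# and the val=2 trace line, not from accel_perf documentation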
00:06:24.785 [2024-07-15 19:01:04.914296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667734 ]
00:06:24.785 EAL: No free 2048 kB hugepages reported on node 1
00:06:24.785 [2024-07-15 19:01:05.001630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.785 [2024-07-15 19:01:05.084186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.785 19:01:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= val= val= val=0x1 val= val= val=decompress val='4096 bytes' val= val=software val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib val=32 val=32 val=2 val='1 seconds' val=Yes val= val= [per-variable case/IFS=:/read -r xtrace elided]
00:06:26.164 19:01:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= x7 [post-run reads; case/IFS=:/read -r xtrace elided]
00:06:26.164 19:01:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:26.164 19:01:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:26.164 19:01:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:26.164
00:06:26.164 real 0m1.396s
00:06:26.164 user 0m1.259s
00:06:26.164 sys 0m0.152s
00:06:26.164 19:01:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:26.164 19:01:06 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:26.164 ************************************
00:06:26.164 END TEST accel_decomp_mthread
00:06:26.164 ************************************
00:06:26.165 19:01:06 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:26.165 19:01:06 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:26.165 19:01:06 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:26.165 19:01:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:26.165 19:01:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.165 ************************************ 00:06:26.165 START TEST accel_decomp_full_mthread 00:06:26.165 ************************************ 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:26.165 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:26.165 [2024-07-15 19:01:06.388969] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
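The run_test line above launches the standalone accel_perf example with the suite's accel JSON config piped in on /dev/fd/62. A minimal sketch of an equivalent direct invocation follows; the flag readings are inferred from the val= values echoed in the xtrace (not from accel_perf's own documentation), so treat them as assumptions.

    #!/usr/bin/env bash
    # Sketch only: mirrors the accel_perf call logged above. Flag meanings are
    # inferred from the surrounding xtrace, not from accel_perf's help output.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # workspace path from the log

    args=(
        -t 1                        # '1 seconds' of runtime, per the xtrace
        -w decompress               # workload under test
        -l "$SPDK/test/accel/bib"   # pre-compressed input file shipped with the tests
        -y                          # verify the decompressed output
        -o 0                        # xtrace shows '111250 bytes' for -o 0, i.e. the whole file
        -T 2                        # two worker threads: the "mthread" in the test name
    )
    # accel.sh feeds its accel JSON config on /dev/fd/62; an empty JSON object
    # stands in for it here (assumption: no accel modules configured).
    "$SPDK/build/examples/accel_perf" -c <(echo '{}') "${args[@]}"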
00:06:26.165 [2024-07-15 19:01:06.389052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667939 ]
00:06:26.165 EAL: No free 2048 kB hugepages reported on node 1
00:06:26.165 [2024-07-15 19:01:06.475440] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:26.165 [2024-07-15 19:01:06.556775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.437 19:01:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= val= val= val=0x1 val= val= val=decompress val='111250 bytes' val= val=software val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib val=32 val=32 val=2 val='1 seconds' val=Yes val= val= [per-variable case/IFS=:/read -r xtrace elided]
00:06:27.373 19:01:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= x7 [post-run reads; case/IFS=:/read -r xtrace elided]
00:06:27.374 19:01:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:27.374 19:01:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:27.374 19:01:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:27.374
00:06:27.374 real 0m1.410s
00:06:27.374 user 0m1.265s
00:06:27.374 sys 0m0.159s
00:06:27.374 19:01:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:27.374 19:01:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:27.374 ************************************
00:06:27.374 END TEST accel_decomp_full_mthread
00:06:27.374 ************************************ 00:06:27.631 19:01:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.631 19:01:07 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:27.631 19:01:07 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:27.631 19:01:07 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:27.631 19:01:07 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:27.631 19:01:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.631 19:01:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.631 19:01:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.631 19:01:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.631 19:01:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.631 19:01:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.631 19:01:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.631 19:01:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:27.631 19:01:07 accel -- accel/accel.sh@41 -- # jq -r . 00:06:27.631 ************************************ 00:06:27.632 START TEST accel_dif_functional_tests 00:06:27.632 ************************************ 00:06:27.632 19:01:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:27.632 [2024-07-15 19:01:07.877291] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:27.632 [2024-07-15 19:01:07.877345] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668137 ] 00:06:27.632 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.632 [2024-07-15 19:01:07.958938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.632 [2024-07-15 19:01:08.056678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.632 [2024-07-15 19:01:08.056777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.632 [2024-07-15 19:01:08.056777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.890 00:06:27.890 00:06:27.890 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.890 http://cunit.sourceforge.net/ 00:06:27.890 00:06:27.890 00:06:27.890 Suite: accel_dif 00:06:27.890 Test: verify: DIF generated, GUARD check ...passed 00:06:27.890 Test: verify: DIF generated, APPTAG check ...passed 00:06:27.890 Test: verify: DIF generated, REFTAG check ...passed 00:06:27.890 Test: verify: DIF not generated, GUARD check ...[2024-07-15 19:01:08.135484] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:27.890 passed 00:06:27.890 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 19:01:08.135540] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:27.890 passed 00:06:27.890 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 19:01:08.135585] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:27.890 passed 00:06:27.890 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:27.890 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 19:01:08.135634] dif.c: 
843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:27.890 passed 00:06:27.890 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:27.890 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:27.890 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:27.890 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 19:01:08.135732] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:27.890 passed 00:06:27.890 Test: verify copy: DIF generated, GUARD check ...passed 00:06:27.890 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:27.890 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:27.890 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 19:01:08.135854] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:27.890 passed 00:06:27.890 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 19:01:08.135881] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:27.890 passed 00:06:27.891 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 19:01:08.135906] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:27.891 passed 00:06:27.891 Test: generate copy: DIF generated, GUARD check ...passed 00:06:27.891 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:27.891 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:27.891 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:27.891 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:27.891 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:27.891 Test: generate copy: iovecs-len validate ...[2024-07-15 19:01:08.136080] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:27.891 passed 00:06:27.891 Test: generate copy: buffer alignment validate ...passed 00:06:27.891 00:06:27.891 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.891 suites 1 1 n/a 0 0 00:06:27.891 tests 26 26 26 0 0 00:06:27.891 asserts 115 115 115 0 n/a 00:06:27.891 00:06:27.891 Elapsed time = 0.000 seconds 00:06:27.891 00:06:27.891 real 0m0.436s 00:06:27.891 user 0m0.602s 00:06:27.891 sys 0m0.173s 00:06:27.891 19:01:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.891 19:01:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:27.891 ************************************ 00:06:27.891 END TEST accel_dif_functional_tests 00:06:27.891 ************************************ 00:06:28.149 19:01:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.149 00:06:28.149 real 0m32.675s 00:06:28.149 user 0m35.164s 00:06:28.149 sys 0m5.729s 00:06:28.149 19:01:08 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.149 19:01:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.149 ************************************ 00:06:28.149 END TEST accel 00:06:28.149 ************************************ 00:06:28.149 19:01:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:28.149 19:01:08 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:28.149 19:01:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.149 19:01:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.149 19:01:08 -- common/autotest_common.sh@10 -- # set +x 00:06:28.149 ************************************ 00:06:28.149 START TEST accel_rpc 00:06:28.149 ************************************ 00:06:28.149 19:01:08 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:28.149 * Looking for test storage... 00:06:28.149 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:28.149 19:01:08 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:28.149 19:01:08 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=668214 00:06:28.149 19:01:08 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 668214 00:06:28.149 19:01:08 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:28.149 19:01:08 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 668214 ']' 00:06:28.149 19:01:08 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.149 19:01:08 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.149 19:01:08 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.149 19:01:08 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.149 19:01:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.149 [2024-07-15 19:01:08.568178] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
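The accel_rpc suite starting here brings up spdk_tgt with --wait-for-rpc so that opcode-to-module assignments can be changed before subsystem initialization. The exchange it performs below can be reproduced directly with scripts/rpc.py; this is a hedged sketch (paths from the workspace layout in the log, the default /var/tmp/spdk.sock socket, and a sleep standing in for the waitforlisten polling the harness uses):

    #!/usr/bin/env bash
    # Sketch of the opcode-assignment flow exercised by TEST accel_assign_opcode below.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    # Start the target with init deferred so opcodes can still be remapped.
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    tgt_pid=$!
    sleep 1   # crude stand-in for waitforlisten on /var/tmp/spdk.sock

    "$RPC" accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
    "$RPC" framework_start_init                     # now finish subsystem initialization
    "$RPC" accel_get_opc_assignments | jq -r .copy  # prints: software

    kill "$tgt_pid"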
00:06:28.149 [2024-07-15 19:01:08.568264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668214 ] 00:06:28.408 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.408 [2024-07-15 19:01:08.640735] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.408 [2024-07-15 19:01:08.720731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.344 19:01:09 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.344 19:01:09 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:29.344 19:01:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:29.344 19:01:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:29.344 19:01:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:29.344 19:01:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:29.344 19:01:09 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:29.344 19:01:09 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.344 19:01:09 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.344 19:01:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.344 ************************************ 00:06:29.344 START TEST accel_assign_opcode 00:06:29.344 ************************************ 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:29.344 [2024-07-15 19:01:09.462930] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:29.344 [2024-07-15 19:01:09.474949] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 
00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.344 software 00:06:29.344 00:06:29.344 real 0m0.265s 00:06:29.344 user 0m0.049s 00:06:29.344 sys 0m0.012s 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.344 19:01:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:29.344 ************************************ 00:06:29.344 END TEST accel_assign_opcode 00:06:29.344 ************************************ 00:06:29.344 19:01:09 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:29.344 19:01:09 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 668214 00:06:29.344 19:01:09 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 668214 ']' 00:06:29.344 19:01:09 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 668214 00:06:29.344 19:01:09 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:29.604 19:01:09 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.604 19:01:09 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 668214 00:06:29.604 19:01:09 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.604 19:01:09 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.604 19:01:09 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 668214' 00:06:29.604 killing process with pid 668214 00:06:29.604 19:01:09 accel_rpc -- common/autotest_common.sh@967 -- # kill 668214 00:06:29.604 19:01:09 accel_rpc -- common/autotest_common.sh@972 -- # wait 668214 00:06:29.864 00:06:29.864 real 0m1.726s 00:06:29.864 user 0m1.759s 00:06:29.864 sys 0m0.520s 00:06:29.864 19:01:10 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.864 19:01:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.864 ************************************ 00:06:29.864 END TEST accel_rpc 00:06:29.864 ************************************ 00:06:29.864 19:01:10 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.864 19:01:10 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:29.864 19:01:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.864 19:01:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.864 19:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:29.864 ************************************ 00:06:29.864 START TEST app_cmdline 00:06:29.864 ************************************ 00:06:29.864 19:01:10 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:30.127 * Looking for test storage... 
00:06:30.127 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:30.127 19:01:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:30.127 19:01:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=668632 00:06:30.127 19:01:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 668632 00:06:30.127 19:01:10 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:30.127 19:01:10 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 668632 ']' 00:06:30.127 19:01:10 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.127 19:01:10 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.127 19:01:10 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.127 19:01:10 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.127 19:01:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.127 [2024-07-15 19:01:10.386011] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:30.127 [2024-07-15 19:01:10.386097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668632 ] 00:06:30.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.127 [2024-07-15 19:01:10.471721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.387 [2024-07-15 19:01:10.563409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.955 19:01:11 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.955 19:01:11 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:30.955 19:01:11 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:30.955 { 00:06:30.955 "version": "SPDK v24.09-pre git sha1 a22f117fe", 00:06:30.955 "fields": { 00:06:30.955 "major": 24, 00:06:30.955 "minor": 9, 00:06:30.955 "patch": 0, 00:06:30.955 "suffix": "-pre", 00:06:30.955 "commit": "a22f117fe" 00:06:30.955 } 00:06:30.955 } 00:06:31.214 19:01:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:31.214 19:01:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:31.214 19:01:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:31.214 19:01:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:31.214 19:01:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.214 19:01:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.214 19:01:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.214 19:01:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:31.214 19:01:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:31.214 19:01:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:31.214 request: 00:06:31.214 { 00:06:31.214 "method": "env_dpdk_get_mem_stats", 00:06:31.214 "req_id": 1 00:06:31.214 } 00:06:31.214 Got JSON-RPC error response 00:06:31.214 response: 00:06:31.214 { 00:06:31.214 "code": -32601, 00:06:31.214 "message": "Method not found" 00:06:31.214 } 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.214 19:01:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 668632 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 668632 ']' 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 668632 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.214 19:01:11 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 668632 00:06:31.474 19:01:11 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.474 19:01:11 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.474 19:01:11 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 668632' 00:06:31.474 killing process with pid 668632 00:06:31.474 19:01:11 app_cmdline -- common/autotest_common.sh@967 -- # kill 668632 00:06:31.474 19:01:11 app_cmdline -- common/autotest_common.sh@972 -- # wait 668632 00:06:31.749 00:06:31.749 real 0m1.749s 00:06:31.749 user 0m1.990s 00:06:31.749 sys 0m0.536s 00:06:31.749 19:01:11 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:31.749 19:01:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.749 ************************************ 00:06:31.749 END TEST app_cmdline 00:06:31.749 ************************************ 00:06:31.749 19:01:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.749 19:01:12 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:31.749 19:01:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.749 19:01:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.749 19:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:31.749 ************************************ 00:06:31.749 START TEST version 00:06:31.749 ************************************ 00:06:31.749 19:01:12 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:32.020 * Looking for test storage... 00:06:32.020 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:32.020 19:01:12 version -- app/version.sh@17 -- # get_header_version major 00:06:32.020 19:01:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:32.020 19:01:12 version -- app/version.sh@14 -- # cut -f2 00:06:32.020 19:01:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.020 19:01:12 version -- app/version.sh@17 -- # major=24 00:06:32.020 19:01:12 version -- app/version.sh@18 -- # get_header_version minor 00:06:32.020 19:01:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:32.020 19:01:12 version -- app/version.sh@14 -- # cut -f2 00:06:32.020 19:01:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.020 19:01:12 version -- app/version.sh@18 -- # minor=9 00:06:32.020 19:01:12 version -- app/version.sh@19 -- # get_header_version patch 00:06:32.020 19:01:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:32.020 19:01:12 version -- app/version.sh@14 -- # cut -f2 00:06:32.020 19:01:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.020 19:01:12 version -- app/version.sh@19 -- # patch=0 00:06:32.020 19:01:12 version -- app/version.sh@20 -- # get_header_version suffix 00:06:32.020 19:01:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:32.020 19:01:12 version -- app/version.sh@14 -- # cut -f2 00:06:32.020 19:01:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.020 19:01:12 version -- app/version.sh@20 -- # suffix=-pre 00:06:32.020 19:01:12 version -- app/version.sh@22 -- # version=24.9 00:06:32.020 19:01:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:32.020 19:01:12 version -- app/version.sh@28 -- # version=24.9rc0 00:06:32.020 19:01:12 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:32.020 19:01:12 version -- app/version.sh@30 -- # python3 -c 
'import spdk; print(spdk.__version__)' 00:06:32.020 19:01:12 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:32.020 19:01:12 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:32.020 00:06:32.020 real 0m0.193s 00:06:32.020 user 0m0.093s 00:06:32.020 sys 0m0.149s 00:06:32.020 19:01:12 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.020 19:01:12 version -- common/autotest_common.sh@10 -- # set +x 00:06:32.020 ************************************ 00:06:32.020 END TEST version 00:06:32.020 ************************************ 00:06:32.020 19:01:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:32.020 19:01:12 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@198 -- # uname -s 00:06:32.020 19:01:12 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:32.020 19:01:12 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:32.020 19:01:12 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:32.020 19:01:12 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:32.020 19:01:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:32.020 19:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:32.020 19:01:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:06:32.020 19:01:12 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:06:32.020 19:01:12 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:06:32.020 19:01:12 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:06:32.020 19:01:12 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:32.020 19:01:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.020 19:01:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.020 19:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:32.021 ************************************ 00:06:32.021 START TEST llvm_fuzz 00:06:32.021 ************************************ 00:06:32.021 19:01:12 llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:32.281 * Looking for test storage... 
00:06:32.281 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:32.281 19:01:12 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:32.281 19:01:12 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:32.281 19:01:12 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:06:32.281 19:01:12 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:06:32.281 19:01:12 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:06:32.281 19:01:12 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:32.281 19:01:12 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:32.281 19:01:12 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:32.281 19:01:12 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:32.281 19:01:12 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:32.281 19:01:12 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:32.281 19:01:12 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:32.281 19:01:12 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:32.281 19:01:12 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:32.281 19:01:12 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:32.281 19:01:12 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:32.281 19:01:12 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:32.281 19:01:12 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:32.281 19:01:12 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.281 19:01:12 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.281 19:01:12 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:32.281 ************************************ 00:06:32.281 START TEST nvmf_llvm_fuzz 00:06:32.281 ************************************ 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:32.281 * Looking for test storage... 
00:06:32.281 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:32.281 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:32.282 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:32.282 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:32.282 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:32.282 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:32.282 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:32.282 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:32.282 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:06:32.282 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:32.282 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:32.282 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:32.282 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:32.282 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:32.544 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:32.544 #define SPDK_CONFIG_H 00:06:32.544 #define SPDK_CONFIG_APPS 1 00:06:32.545 #define SPDK_CONFIG_ARCH native 00:06:32.545 #undef SPDK_CONFIG_ASAN 00:06:32.545 #undef SPDK_CONFIG_AVAHI 00:06:32.545 #undef SPDK_CONFIG_CET 00:06:32.545 #define SPDK_CONFIG_COVERAGE 1 00:06:32.545 #define SPDK_CONFIG_CROSS_PREFIX 00:06:32.545 #undef SPDK_CONFIG_CRYPTO 00:06:32.545 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:32.545 #undef SPDK_CONFIG_CUSTOMOCF 00:06:32.545 #undef SPDK_CONFIG_DAOS 00:06:32.545 #define SPDK_CONFIG_DAOS_DIR 00:06:32.545 #define SPDK_CONFIG_DEBUG 1 00:06:32.545 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:32.545 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:32.545 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:32.545 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:32.545 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:32.545 #undef SPDK_CONFIG_DPDK_UADK 00:06:32.545 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:32.545 #define SPDK_CONFIG_EXAMPLES 1 00:06:32.545 #undef SPDK_CONFIG_FC 00:06:32.545 #define SPDK_CONFIG_FC_PATH 00:06:32.545 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:32.545 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:32.545 #undef SPDK_CONFIG_FUSE 00:06:32.545 #define SPDK_CONFIG_FUZZER 1 00:06:32.545 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:32.545 #undef SPDK_CONFIG_GOLANG 00:06:32.545 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:32.545 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:32.545 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:32.545 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:32.545 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:32.545 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:32.545 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:32.545 #define SPDK_CONFIG_IDXD 1 00:06:32.545 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:32.545 #undef SPDK_CONFIG_IPSEC_MB 00:06:32.545 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:32.545 #define SPDK_CONFIG_ISAL 1 00:06:32.545 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:06:32.545 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:32.545 #define SPDK_CONFIG_LIBDIR 00:06:32.545 #undef SPDK_CONFIG_LTO 00:06:32.545 #define SPDK_CONFIG_MAX_LCORES 128 00:06:32.545 #define SPDK_CONFIG_NVME_CUSE 1 00:06:32.545 #undef SPDK_CONFIG_OCF 00:06:32.545 #define SPDK_CONFIG_OCF_PATH 00:06:32.545 #define SPDK_CONFIG_OPENSSL_PATH 00:06:32.545 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:32.545 #define SPDK_CONFIG_PGO_DIR 00:06:32.545 #undef SPDK_CONFIG_PGO_USE 00:06:32.545 #define SPDK_CONFIG_PREFIX /usr/local 00:06:32.545 #undef SPDK_CONFIG_RAID5F 00:06:32.545 #undef SPDK_CONFIG_RBD 00:06:32.545 #define SPDK_CONFIG_RDMA 1 00:06:32.545 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:32.545 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:32.545 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:32.545 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:32.545 #undef SPDK_CONFIG_SHARED 00:06:32.545 #undef SPDK_CONFIG_SMA 00:06:32.545 #define SPDK_CONFIG_TESTS 1 00:06:32.545 #undef SPDK_CONFIG_TSAN 00:06:32.545 #define SPDK_CONFIG_UBLK 1 00:06:32.545 #define SPDK_CONFIG_UBSAN 1 00:06:32.545 #undef SPDK_CONFIG_UNIT_TESTS 00:06:32.545 #undef SPDK_CONFIG_URING 00:06:32.545 #define SPDK_CONFIG_URING_PATH 00:06:32.545 #undef SPDK_CONFIG_URING_ZNS 00:06:32.545 #undef SPDK_CONFIG_USDT 00:06:32.545 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:32.545 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:32.545 #define SPDK_CONFIG_VFIO_USER 1 00:06:32.545 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:32.545 #define SPDK_CONFIG_VHOST 1 00:06:32.545 #define SPDK_CONFIG_VIRTIO 1 00:06:32.545 #undef SPDK_CONFIG_VTUNE 00:06:32.545 #define SPDK_CONFIG_VTUNE_DIR 00:06:32.545 #define SPDK_CONFIG_WERROR 1 00:06:32.545 #define SPDK_CONFIG_WPDK_DIR 00:06:32.545 #undef SPDK_CONFIG_XNVME 00:06:32.545 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
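The paths/export.sh records above and just below prepend the same tool directories (go, protoc, golangci) on every re-source, which is why the exported PATH accumulates duplicate segments. A sketch, assuming nothing beyond plain bash and using one directory taken from the trace, of an idempotent prepend that would keep the value stable:

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # segment already present, skip it
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    export PATH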
00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:32.545 
19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:32.545 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:32.546 19:01:12 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
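The long run of paired records above and below ("# : 0" or "# : 1" followed by "export SPDK_TEST_*") is the usual shell default-then-export idiom: each test knob keeps any value inherited from the job configuration, otherwise falls back to a default, and is then exported for child processes. A sketch of the idiom with two knobs that appear in this trace (the defaults shown are illustrative; the real per-flag defaults vary):

    : "${SPDK_TEST_FUZZER:=0}"        # keep an inherited value, else default to 0
    : "${SPDK_RUN_UBSAN:=0}"
    export SPDK_TEST_FUZZER SPDK_RUN_UBSAN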
00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.546 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:06:32.547 19:01:12 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:32.547 19:01:12 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 668990 ]] 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 668990 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:32.547 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.s30pT1 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.s30pT1/tests/nvmf /tmp/spdk.s30pT1 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=945618944 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4338810880 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=50214744064 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742551040 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=11527806976 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866563072 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871273472 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342710272 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348510208 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5799936 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30870695936 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871277568 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=581632 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:32.548 19:01:12 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174248960 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174253056 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:32.548 * Looking for test storage... 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=50214744064 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=13742399488 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:32.548 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:32.548 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:32.549 19:01:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:32.549 [2024-07-15 19:01:12.927750] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:32.549 [2024-07-15 19:01:12.927834] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669091 ] 00:06:32.549 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.808 [2024-07-15 19:01:13.221431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.067 [2024-07-15 19:01:13.309464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.067 [2024-07-15 19:01:13.368777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.067 [2024-07-15 19:01:13.385084] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:33.067 INFO: Running with entropic power schedule (0xFF, 100). 00:06:33.067 INFO: Seed: 2502937531 00:06:33.067 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:33.067 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:33.067 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:33.067 INFO: A corpus is not provided, starting from an empty corpus 00:06:33.067 #2 INITED exec/s: 0 rss: 65Mb 00:06:33.067 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
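From this point the console carries the libFuzzer status stream of llvm_nvme_fuzz; the warning above continues on the next line and is commonly seen when starting from an empty corpus. In status lines such as "#29 NEW cov: 11846 ft: 11845 corp: 2/68b lim: 320 exec/s: 0 rss: 71Mb L: 67/67 MS: 2 ...", cov counts covered code blocks or edges, ft counts coverage features, corp gives corpus size in units and bytes, lim is the current input-length limit, L is the new unit's length (and, after the slash, the largest unit so far), and MS is the mutation sequence that produced it. As a sketch for tracking coverage growth, assuming this console output has been saved to a file named fuzz.log:

    # pull the coverage column from libFuzzer NEW-unit status lines
    grep -oE 'NEW cov: [0-9]+' fuzz.log | awk '{print $3}' | tail -n 1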
00:06:33.067 This may also happen if the target rejected all inputs we tried so far 00:06:33.067 [2024-07-15 19:01:13.450290] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.067 [2024-07-15 19:01:13.450321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.635 NEW_FUNC[1/695]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:33.635 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:33.635 #29 NEW cov: 11846 ft: 11845 corp: 2/68b lim: 320 exec/s: 0 rss: 71Mb L: 67/67 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:33.635 [2024-07-15 19:01:13.791309] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.635 [2024-07-15 19:01:13.791371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.635 #30 NEW cov: 11976 ft: 12537 corp: 3/136b lim: 320 exec/s: 0 rss: 72Mb L: 68/68 MS: 1 CrossOver- 00:06:33.635 [2024-07-15 19:01:13.851224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.635 [2024-07-15 19:01:13.851251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.635 #31 NEW cov: 11982 ft: 12819 corp: 4/203b lim: 320 exec/s: 0 rss: 72Mb L: 67/68 MS: 1 ChangeBinInt- 00:06:33.635 [2024-07-15 19:01:13.891323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.635 [2024-07-15 19:01:13.891352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.635 #32 NEW cov: 12067 ft: 13080 corp: 5/270b lim: 320 exec/s: 0 rss: 72Mb L: 67/68 MS: 1 ChangeByte- 00:06:33.635 [2024-07-15 19:01:13.931475] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.635 [2024-07-15 19:01:13.931500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.635 #33 NEW cov: 12067 ft: 13124 corp: 6/338b lim: 320 exec/s: 0 rss: 72Mb L: 68/68 MS: 1 CopyPart- 00:06:33.635 [2024-07-15 19:01:13.981546] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.635 [2024-07-15 19:01:13.981570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.635 #34 NEW cov: 12067 ft: 13183 corp: 7/405b lim: 320 exec/s: 0 rss: 72Mb L: 67/68 MS: 1 ChangeBinInt- 00:06:33.635 [2024-07-15 19:01:14.021792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.635 [2024-07-15 19:01:14.021816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.635 [2024-07-15 
19:01:14.021863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.635 [2024-07-15 19:01:14.021877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.635 #35 NEW cov: 12090 ft: 13419 corp: 8/540b lim: 320 exec/s: 0 rss: 72Mb L: 135/135 MS: 1 CrossOver- 00:06:33.635 [2024-07-15 19:01:14.061847] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.635 [2024-07-15 19:01:14.061873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.894 #36 NEW cov: 12090 ft: 13496 corp: 9/607b lim: 320 exec/s: 0 rss: 72Mb L: 67/135 MS: 1 ShuffleBytes- 00:06:33.894 [2024-07-15 19:01:14.112029] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.894 [2024-07-15 19:01:14.112054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.894 [2024-07-15 19:01:14.112104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.894 [2024-07-15 19:01:14.112118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.894 #37 NEW cov: 12090 ft: 13545 corp: 10/742b lim: 320 exec/s: 0 rss: 72Mb L: 135/135 MS: 1 ChangeBit- 00:06:33.894 [2024-07-15 19:01:14.162069] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.894 [2024-07-15 19:01:14.162097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.894 #38 NEW cov: 12090 ft: 13603 corp: 11/809b lim: 320 exec/s: 0 rss: 72Mb L: 67/135 MS: 1 CopyPart- 00:06:33.895 [2024-07-15 19:01:14.212221] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.895 [2024-07-15 19:01:14.212250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.895 #44 NEW cov: 12090 ft: 13664 corp: 12/876b lim: 320 exec/s: 0 rss: 72Mb L: 67/135 MS: 1 ChangeBinInt- 00:06:33.895 [2024-07-15 19:01:14.262360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.895 [2024-07-15 19:01:14.262386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.895 #45 NEW cov: 12090 ft: 13693 corp: 13/952b lim: 320 exec/s: 0 rss: 72Mb L: 76/135 MS: 1 CMP- DE: "\376\354\307\256\2414\235\325"- 00:06:33.895 [2024-07-15 19:01:14.312529] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.895 [2024-07-15 19:01:14.312556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.153 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:34.153 #46 NEW cov: 12113 ft: 13784 corp: 14/1019b lim: 320 exec/s: 0 rss: 73Mb L: 67/135 MS: 1 ChangeBit- 00:06:34.153 [2024-07-15 19:01:14.352565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.153 [2024-07-15 19:01:14.352591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.153 #47 NEW cov: 12113 ft: 13807 corp: 15/1141b lim: 320 exec/s: 0 rss: 73Mb L: 122/135 MS: 1 CopyPart- 00:06:34.153 [2024-07-15 19:01:14.392682] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.153 [2024-07-15 19:01:14.392708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.153 #48 NEW cov: 12113 ft: 13823 corp: 16/1209b lim: 320 exec/s: 0 rss: 73Mb L: 68/135 MS: 1 InsertByte- 00:06:34.153 [2024-07-15 19:01:14.442862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.153 [2024-07-15 19:01:14.442888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.153 #54 NEW cov: 12113 ft: 13899 corp: 17/1276b lim: 320 exec/s: 54 rss: 73Mb L: 67/135 MS: 1 ChangeBinInt- 00:06:34.153 [2024-07-15 19:01:14.482908] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.153 [2024-07-15 19:01:14.482934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.153 #55 NEW cov: 12113 ft: 13911 corp: 18/1384b lim: 320 exec/s: 55 rss: 73Mb L: 108/135 MS: 1 CopyPart- 00:06:34.153 [2024-07-15 19:01:14.523044] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.153 [2024-07-15 19:01:14.523071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.153 #56 NEW cov: 12113 ft: 13924 corp: 19/1451b lim: 320 exec/s: 56 rss: 73Mb L: 67/135 MS: 1 ChangeBinInt- 00:06:34.153 [2024-07-15 19:01:14.563159] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.153 [2024-07-15 19:01:14.563184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.412 #57 NEW cov: 12113 ft: 13976 corp: 20/1573b lim: 320 exec/s: 57 rss: 73Mb L: 122/135 MS: 1 CopyPart- 00:06:34.412 [2024-07-15 19:01:14.613302] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.412 [2024-07-15 19:01:14.613327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.412 [2024-07-15 19:01:14.643368] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.412 [2024-07-15 19:01:14.643393] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.412 #59 NEW cov: 12113 ft: 13985 corp: 21/1640b lim: 320 exec/s: 59 rss: 73Mb L: 67/135 MS: 2 ChangeBinInt-CMP- DE: "\003\000"- 00:06:34.412 [2024-07-15 19:01:14.683484] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.412 [2024-07-15 19:01:14.683513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.412 #60 NEW cov: 12113 ft: 14010 corp: 22/1715b lim: 320 exec/s: 60 rss: 73Mb L: 75/135 MS: 1 CMP- DE: "\000\000\000\000\0023x\251"- 00:06:34.412 [2024-07-15 19:01:14.733669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00006000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.412 [2024-07-15 19:01:14.733694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.412 #61 NEW cov: 12113 ft: 14028 corp: 23/1791b lim: 320 exec/s: 61 rss: 73Mb L: 76/135 MS: 1 ChangeByte- 00:06:34.412 [2024-07-15 19:01:14.783813] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.412 [2024-07-15 19:01:14.783837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.412 #62 NEW cov: 12113 ft: 14051 corp: 24/1860b lim: 320 exec/s: 62 rss: 73Mb L: 69/135 MS: 1 CrossOver- 00:06:34.412 [2024-07-15 19:01:14.833922] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.412 [2024-07-15 19:01:14.833947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.671 #63 NEW cov: 12113 ft: 14062 corp: 25/1960b lim: 320 exec/s: 63 rss: 73Mb L: 100/135 MS: 1 CrossOver- 00:06:34.671 [2024-07-15 19:01:14.874027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.671 [2024-07-15 19:01:14.874052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.671 #64 NEW cov: 12113 ft: 14119 corp: 26/2027b lim: 320 exec/s: 64 rss: 73Mb L: 67/135 MS: 1 PersAutoDict- DE: "\376\354\307\256\2414\235\325"- 00:06:34.671 [2024-07-15 19:01:14.914149] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00006000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.671 [2024-07-15 19:01:14.914173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.671 #65 NEW cov: 12113 ft: 14137 corp: 27/2103b lim: 320 exec/s: 65 rss: 73Mb L: 76/135 MS: 1 PersAutoDict- DE: "\000\000\000\000\0023x\251"- 00:06:34.671 [2024-07-15 19:01:14.964286] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.671 [2024-07-15 19:01:14.964310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.671 #66 NEW cov: 12113 ft: 14161 corp: 28/2170b lim: 320 exec/s: 66 rss: 73Mb L: 
67/135 MS: 1 PersAutoDict- DE: "\000\000\000\000\0023x\251"- 00:06:34.671 [2024-07-15 19:01:15.004401] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.671 [2024-07-15 19:01:15.004425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.671 #67 NEW cov: 12113 ft: 14191 corp: 29/2244b lim: 320 exec/s: 67 rss: 73Mb L: 74/135 MS: 1 CopyPart- 00:06:34.671 [2024-07-15 19:01:15.044524] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.671 [2024-07-15 19:01:15.044548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.671 #68 NEW cov: 12113 ft: 14212 corp: 30/2319b lim: 320 exec/s: 68 rss: 74Mb L: 75/135 MS: 1 PersAutoDict- DE: "\000\000\000\000\0023x\251"- 00:06:34.671 [2024-07-15 19:01:15.084565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.671 [2024-07-15 19:01:15.084589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.931 #69 NEW cov: 12113 ft: 14228 corp: 31/2386b lim: 320 exec/s: 69 rss: 74Mb L: 67/135 MS: 1 CopyPart- 00:06:34.931 [2024-07-15 19:01:15.134703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.931 [2024-07-15 19:01:15.134728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.931 #70 NEW cov: 12113 ft: 14233 corp: 32/2462b lim: 320 exec/s: 70 rss: 74Mb L: 76/135 MS: 1 ChangeBinInt- 00:06:34.931 [2024-07-15 19:01:15.174815] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa00000000000000 00:06:34.931 [2024-07-15 19:01:15.174839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.931 #71 NEW cov: 12113 ft: 14248 corp: 33/2529b lim: 320 exec/s: 71 rss: 74Mb L: 67/135 MS: 1 ChangeBinInt- 00:06:34.931 [2024-07-15 19:01:15.214942] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa00000000000000 00:06:34.931 [2024-07-15 19:01:15.214967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.931 #72 NEW cov: 12113 ft: 14249 corp: 34/2615b lim: 320 exec/s: 72 rss: 74Mb L: 86/135 MS: 1 CrossOver- 00:06:34.931 [2024-07-15 19:01:15.265076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.931 [2024-07-15 19:01:15.265101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.931 #73 NEW cov: 12113 ft: 14263 corp: 35/2683b lim: 320 exec/s: 73 rss: 74Mb L: 68/135 MS: 1 InsertByte- 00:06:34.931 [2024-07-15 19:01:15.315234] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.931 
[2024-07-15 19:01:15.315259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.931 #74 NEW cov: 12113 ft: 14281 corp: 36/2750b lim: 320 exec/s: 74 rss: 74Mb L: 67/135 MS: 1 CMP- DE: "\234\003\372\364Q8\023\000"- 00:06:35.190 [2024-07-15 19:01:15.365373] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.190 [2024-07-15 19:01:15.365398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.190 #75 NEW cov: 12113 ft: 14300 corp: 37/2825b lim: 320 exec/s: 75 rss: 75Mb L: 75/135 MS: 1 ChangeBinInt- 00:06:35.190 [2024-07-15 19:01:15.415489] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.190 [2024-07-15 19:01:15.415516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.190 #76 NEW cov: 12113 ft: 14308 corp: 38/2892b lim: 320 exec/s: 38 rss: 75Mb L: 67/135 MS: 1 ShuffleBytes- 00:06:35.190 #76 DONE cov: 12113 ft: 14308 corp: 38/2892b lim: 320 exec/s: 38 rss: 75Mb 00:06:35.190 ###### Recommended dictionary. ###### 00:06:35.190 "\376\354\307\256\2414\235\325" # Uses: 1 00:06:35.190 "\003\000" # Uses: 0 00:06:35.190 "\000\000\000\000\0023x\251" # Uses: 3 00:06:35.190 "\234\003\372\364Q8\023\000" # Uses: 0 00:06:35.190 ###### End of recommended dictionary. ###### 00:06:35.190 Done 76 runs in 2 second(s) 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 
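[Annotation] The shell trace above shows nvmf/run.sh deriving a per-instance NVMe/TCP service ID ("44" plus the zero-padded fuzzer index from printf %02d, giving 4401 for instance 1) and rewriting the shared fuzz_json.conf so this instance targets its own listener; the /tmp/fuzz_json_1.conf passed via -c below is the result. A minimal sketch of that substitution, assuming the same sed-based approach; the loop variable and relative paths here are illustrative, not taken from the script:

    #!/usr/bin/env bash
    # Sketch: derive the per-instance trsvcid and rewrite the JSON config,
    # mirroring the printf/sed steps in the trace above.
    i=1                                   # fuzzer index; instance 1 in this run
    port="44$(printf %02d "$i")"          # "44" + "01" -> 4401, matching the trid above
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        spdk/test/fuzz/llvm/nvmf/fuzz_json.conf > "/tmp/fuzz_json_$i.conf"
    echo "instance $i will listen on TCP port $port"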
00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:35.190 19:01:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:35.449 [2024-07-15 19:01:15.625754] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:35.449 [2024-07-15 19:01:15.625822] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669434 ] 00:06:35.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.449 [2024-07-15 19:01:15.833896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.708 [2024-07-15 19:01:15.905886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.708 [2024-07-15 19:01:15.965341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.708 [2024-07-15 19:01:15.981639] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:35.708 INFO: Running with entropic power schedule (0xFF, 100). 00:06:35.708 INFO: Seed: 804005454 00:06:35.708 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:35.708 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:35.708 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:35.708 INFO: A corpus is not provided, starting from an empty corpus 00:06:35.708 #2 INITED exec/s: 0 rss: 65Mb 00:06:35.708 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:35.708 This may also happen if the target rejected all inputs we tried so far 00:06:35.708 [2024-07-15 19:01:16.046750] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:35.708 [2024-07-15 19:01:16.046882] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:35.708 [2024-07-15 19:01:16.046984] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:35.708 [2024-07-15 19:01:16.047085] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:35.708 [2024-07-15 19:01:16.047301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-07-15 19:01:16.047333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.708 [2024-07-15 19:01:16.047388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-07-15 19:01:16.047406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.708 [2024-07-15 19:01:16.047458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-07-15 19:01:16.047472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.708 [2024-07-15 19:01:16.047524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-07-15 19:01:16.047537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.966 NEW_FUNC[1/696]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:35.966 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:35.966 #6 NEW cov: 11929 ft: 11925 corp: 2/27b lim: 30 exec/s: 0 rss: 72Mb L: 26/26 MS: 4 ChangeByte-ChangeBit-CopyPart-InsertRepeatedBytes- 00:06:35.966 [2024-07-15 19:01:16.387860] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:35.966 [2024-07-15 19:01:16.388007] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:35.966 [2024-07-15 19:01:16.388122] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:35.966 [2024-07-15 19:01:16.388241] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:35.966 [2024-07-15 19:01:16.388509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.966 [2024-07-15 19:01:16.388575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.966 [2024-07-15 19:01:16.388669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 
nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.966 [2024-07-15 19:01:16.388701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.966 [2024-07-15 19:01:16.388787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.966 [2024-07-15 19:01:16.388819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.966 [2024-07-15 19:01:16.388907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.966 [2024-07-15 19:01:16.388937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.224 #13 NEW cov: 12059 ft: 12672 corp: 3/53b lim: 30 exec/s: 0 rss: 72Mb L: 26/26 MS: 2 ChangeBinInt-CrossOver- 00:06:36.224 [2024-07-15 19:01:16.437671] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.224 [2024-07-15 19:01:16.437786] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.224 [2024-07-15 19:01:16.437887] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.224 [2024-07-15 19:01:16.438004] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.224 [2024-07-15 19:01:16.438221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.438247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.224 [2024-07-15 19:01:16.438306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.438320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.224 [2024-07-15 19:01:16.438373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.438387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.224 [2024-07-15 19:01:16.438438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.438452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.224 #14 NEW cov: 12065 ft: 12938 corp: 4/79b lim: 30 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 ShuffleBytes- 00:06:36.224 [2024-07-15 19:01:16.487736] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.224 [2024-07-15 19:01:16.487866] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.224 [2024-07-15 19:01:16.487969] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page 
offset 0x30000ffff 00:06:36.224 [2024-07-15 19:01:16.488170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.488196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.224 [2024-07-15 19:01:16.488252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.488266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.224 [2024-07-15 19:01:16.488327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.488341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.224 #17 NEW cov: 12150 ft: 13779 corp: 5/100b lim: 30 exec/s: 0 rss: 72Mb L: 21/26 MS: 3 CMP-ChangeByte-InsertRepeatedBytes- DE: "\001\000\000\000"- 00:06:36.224 [2024-07-15 19:01:16.527883] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.224 [2024-07-15 19:01:16.528000] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.224 [2024-07-15 19:01:16.528108] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.224 [2024-07-15 19:01:16.528207] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.224 [2024-07-15 19:01:16.528432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.528458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.224 [2024-07-15 19:01:16.528512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.528526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.224 [2024-07-15 19:01:16.528577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b5b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.528591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.224 [2024-07-15 19:01:16.528646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.528659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.224 #18 NEW cov: 12150 ft: 13903 corp: 6/126b lim: 30 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 ChangeBit- 00:06:36.224 [2024-07-15 19:01:16.578001] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.224 [2024-07-15 19:01:16.578114] 
ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.224 [2024-07-15 19:01:16.578226] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.224 [2024-07-15 19:01:16.578429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.578456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.224 [2024-07-15 19:01:16.578511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.224 [2024-07-15 19:01:16.578524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.225 [2024-07-15 19:01:16.578576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.225 [2024-07-15 19:01:16.578591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.225 #19 NEW cov: 12150 ft: 14004 corp: 7/147b lim: 30 exec/s: 0 rss: 72Mb L: 21/26 MS: 1 CopyPart- 00:06:36.225 [2024-07-15 19:01:16.628166] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.225 [2024-07-15 19:01:16.628306] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff01 00:06:36.225 [2024-07-15 19:01:16.628418] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.225 [2024-07-15 19:01:16.628526] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:36.225 [2024-07-15 19:01:16.628734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.225 [2024-07-15 19:01:16.628760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.225 [2024-07-15 19:01:16.628814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.225 [2024-07-15 19:01:16.628829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.225 [2024-07-15 19:01:16.628883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.225 [2024-07-15 19:01:16.628896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.225 [2024-07-15 19:01:16.628947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.225 [2024-07-15 19:01:16.628961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.225 #20 NEW cov: 12173 ft: 14072 corp: 8/172b lim: 30 exec/s: 0 rss: 72Mb L: 25/26 MS: 1 PersAutoDict- DE: 
"\001\000\000\000"- 00:06:36.483 [2024-07-15 19:01:16.668213] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.668355] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.668558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6fb181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.483 [2024-07-15 19:01:16.668585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.483 [2024-07-15 19:01:16.668639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.483 [2024-07-15 19:01:16.668653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.483 #24 NEW cov: 12173 ft: 14427 corp: 9/186b lim: 30 exec/s: 0 rss: 72Mb L: 14/26 MS: 4 ChangeBit-ChangeByte-ChangeByte-CrossOver- 00:06:36.483 [2024-07-15 19:01:16.708389] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.708503] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.708609] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.708711] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.708911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.483 [2024-07-15 19:01:16.708936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.483 [2024-07-15 19:01:16.708989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.483 [2024-07-15 19:01:16.709004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.483 [2024-07-15 19:01:16.709054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.483 [2024-07-15 19:01:16.709067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.483 [2024-07-15 19:01:16.709119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3ab181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.483 [2024-07-15 19:01:16.709132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.483 #25 NEW cov: 12173 ft: 14483 corp: 10/213b lim: 30 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 InsertByte- 00:06:36.483 [2024-07-15 19:01:16.748501] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.748618] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.748723] 
ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.748827] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.749028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.483 [2024-07-15 19:01:16.749053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.483 [2024-07-15 19:01:16.749106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.483 [2024-07-15 19:01:16.749120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.483 [2024-07-15 19:01:16.749172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b581b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.483 [2024-07-15 19:01:16.749188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.483 [2024-07-15 19:01:16.749249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.483 [2024-07-15 19:01:16.749263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.483 #26 NEW cov: 12173 ft: 14535 corp: 11/240b lim: 30 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 InsertByte- 00:06:36.483 [2024-07-15 19:01:16.798589] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.483 [2024-07-15 19:01:16.798723] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.483 [2024-07-15 19:01:16.798925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.483 [2024-07-15 19:01:16.798950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.483 [2024-07-15 19:01:16.799005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.483 [2024-07-15 19:01:16.799019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.483 #28 NEW cov: 12173 ft: 14566 corp: 12/252b lim: 30 exec/s: 0 rss: 72Mb L: 12/27 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:06:36.483 [2024-07-15 19:01:16.838780] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.838912] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.839020] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.839120] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.483 [2024-07-15 19:01:16.839328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:4 nsid:0 cdw10:01b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.484 [2024-07-15 19:01:16.839354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.484 [2024-07-15 19:01:16.839408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.484 [2024-07-15 19:01:16.839423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.484 [2024-07-15 19:01:16.839474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b581b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.484 [2024-07-15 19:01:16.839488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.484 [2024-07-15 19:01:16.839541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.484 [2024-07-15 19:01:16.839554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.484 #29 NEW cov: 12173 ft: 14600 corp: 13/279b lim: 30 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 ShuffleBytes- 00:06:36.484 [2024-07-15 19:01:16.888889] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.484 [2024-07-15 19:01:16.889018] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.484 [2024-07-15 19:01:16.889123] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (181960) > buf size (4096) 00:06:36.484 [2024-07-15 19:01:16.889232] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.484 [2024-07-15 19:01:16.889458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.484 [2024-07-15 19:01:16.889484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.484 [2024-07-15 19:01:16.889534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.484 [2024-07-15 19:01:16.889548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.484 [2024-07-15 19:01:16.889600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b10001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.484 [2024-07-15 19:01:16.889614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.484 [2024-07-15 19:01:16.889665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.484 [2024-07-15 19:01:16.889678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.742 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:36.742 #30 NEW cov: 12196 ft: 14651 corp: 14/305b lim: 30 exec/s: 0 rss: 73Mb L: 26/27 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:36.742 [2024-07-15 19:01:16.949060] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.742 [2024-07-15 19:01:16.949192] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.742 [2024-07-15 19:01:16.949307] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.742 [2024-07-15 19:01:16.949421] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.742 [2024-07-15 19:01:16.949638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01b1817a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.742 [2024-07-15 19:01:16.949663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.743 [2024-07-15 19:01:16.949717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:16.949732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.743 [2024-07-15 19:01:16.949782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b581b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:16.949795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.743 [2024-07-15 19:01:16.949848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:16.949861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.743 #31 NEW cov: 12196 ft: 14761 corp: 15/332b lim: 30 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 ChangeByte- 00:06:36.743 [2024-07-15 19:01:16.989203] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.743 [2024-07-15 19:01:16.989343] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.743 [2024-07-15 19:01:16.989447] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.743 [2024-07-15 19:01:16.989554] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.743 [2024-07-15 19:01:16.989761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:16.989791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.743 [2024-07-15 19:01:16.989843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:16.989857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:36.743 [2024-07-15 19:01:16.989911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b581b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:16.989924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.743 [2024-07-15 19:01:16.989975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:16.989988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.743 #32 NEW cov: 12196 ft: 14811 corp: 16/359b lim: 30 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 ShuffleBytes- 00:06:36.743 [2024-07-15 19:01:17.029320] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.743 [2024-07-15 19:01:17.029436] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.743 [2024-07-15 19:01:17.029546] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000de52 00:06:36.743 [2024-07-15 19:01:17.029649] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:36.743 [2024-07-15 19:01:17.029869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:17.029895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.743 [2024-07-15 19:01:17.029948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:17.029962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.743 [2024-07-15 19:01:17.030013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1fb836c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:17.030027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.743 [2024-07-15 19:01:17.030080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:38138100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:17.030094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.743 #33 NEW cov: 12196 ft: 14847 corp: 17/385b lim: 30 exec/s: 33 rss: 73Mb L: 26/27 MS: 1 CMP- DE: "\373lS\336R8\023\000"- 00:06:36.743 [2024-07-15 19:01:17.079432] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (263172) > buf size (4096) 00:06:36.743 [2024-07-15 19:01:17.079565] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.743 [2024-07-15 19:01:17.079674] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.743 [2024-07-15 19:01:17.079876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:17.079901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.743 [2024-07-15 19:01:17.079956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:17.079973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.743 [2024-07-15 19:01:17.080025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:17.080038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.743 #34 NEW cov: 12196 ft: 14852 corp: 18/406b lim: 30 exec/s: 34 rss: 73Mb L: 21/27 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:36.743 [2024-07-15 19:01:17.129503] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.743 [2024-07-15 19:01:17.129723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.743 [2024-07-15 19:01:17.129749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.743 #35 NEW cov: 12196 ft: 15235 corp: 19/415b lim: 30 exec/s: 35 rss: 73Mb L: 9/27 MS: 1 CrossOver- 00:06:37.001 [2024-07-15 19:01:17.179768] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.001 [2024-07-15 19:01:17.179888] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.001 [2024-07-15 19:01:17.180000] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1fb 00:06:37.001 [2024-07-15 19:01:17.180107] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003813 00:06:37.001 [2024-07-15 19:01:17.180336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.001 [2024-07-15 19:01:17.180373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.001 [2024-07-15 19:01:17.180425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.001 [2024-07-15 19:01:17.180439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.001 [2024-07-15 19:01:17.180490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b581b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.001 [2024-07-15 19:01:17.180504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.001 [2024-07-15 19:01:17.180555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6c5302de cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.001 [2024-07-15 19:01:17.180568] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.001 #36 NEW cov: 12196 ft: 15248 corp: 20/442b lim: 30 exec/s: 36 rss: 73Mb L: 27/27 MS: 1 PersAutoDict- DE: "\373lS\336R8\023\000"- 00:06:37.001 [2024-07-15 19:01:17.219769] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:37.001 [2024-07-15 19:01:17.219888] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:06:37.002 [2024-07-15 19:01:17.220089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.220115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.002 [2024-07-15 19:01:17.220167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffb181ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.220182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.002 #37 NEW cov: 12196 ft: 15262 corp: 21/458b lim: 30 exec/s: 37 rss: 73Mb L: 16/27 MS: 1 CopyPart- 00:06:37.002 [2024-07-15 19:01:17.269925] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:37.002 [2024-07-15 19:01:17.270041] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.002 [2024-07-15 19:01:17.270144] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.002 [2024-07-15 19:01:17.270353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.270379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.002 [2024-07-15 19:01:17.270431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.270445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.002 [2024-07-15 19:01:17.270494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.270508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.002 #38 NEW cov: 12196 ft: 15322 corp: 22/480b lim: 30 exec/s: 38 rss: 73Mb L: 22/27 MS: 1 InsertByte- 00:06:37.002 [2024-07-15 19:01:17.310072] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.002 [2024-07-15 19:01:17.310190] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b5 00:06:37.002 [2024-07-15 19:01:17.310306] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (181960) > buf size (4096) 00:06:37.002 [2024-07-15 19:01:17.310411] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.002 [2024-07-15 19:01:17.310626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.310652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.002 [2024-07-15 19:01:17.310702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.310716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.002 [2024-07-15 19:01:17.310768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b10001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.310782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.002 [2024-07-15 19:01:17.310833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.310846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.002 #39 NEW cov: 12196 ft: 15330 corp: 23/506b lim: 30 exec/s: 39 rss: 73Mb L: 26/27 MS: 1 ChangeBit- 00:06:37.002 [2024-07-15 19:01:17.350225] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.002 [2024-07-15 19:01:17.350341] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.002 [2024-07-15 19:01:17.350449] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.002 [2024-07-15 19:01:17.350554] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.002 [2024-07-15 19:01:17.350773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.350802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.002 [2024-07-15 19:01:17.350857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.350871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.002 [2024-07-15 19:01:17.350925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b5b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.350940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.002 [2024-07-15 19:01:17.350991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.351004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.002 #40 NEW cov: 12196 ft: 15335 corp: 24/532b lim: 30 exec/s: 40 
rss: 73Mb L: 26/27 MS: 1 ShuffleBytes- 00:06:37.002 [2024-07-15 19:01:17.390236] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.002 [2024-07-15 19:01:17.390366] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.002 [2024-07-15 19:01:17.390570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6fb181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.390596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.002 [2024-07-15 19:01:17.390648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.002 [2024-07-15 19:01:17.390661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.002 #41 NEW cov: 12196 ft: 15358 corp: 25/546b lim: 30 exec/s: 41 rss: 73Mb L: 14/27 MS: 1 CrossOver- 00:06:37.261 [2024-07-15 19:01:17.440489] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.261 [2024-07-15 19:01:17.440606] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.261 [2024-07-15 19:01:17.440714] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000de52 00:06:37.261 [2024-07-15 19:01:17.440817] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.261 [2024-07-15 19:01:17.441028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.261 [2024-07-15 19:01:17.441054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.261 [2024-07-15 19:01:17.441107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.261 [2024-07-15 19:01:17.441121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.261 [2024-07-15 19:01:17.441175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1fb836c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.261 [2024-07-15 19:01:17.441189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.261 [2024-07-15 19:01:17.441244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:38138100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.261 [2024-07-15 19:01:17.441260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.261 #42 NEW cov: 12196 ft: 15433 corp: 26/572b lim: 30 exec/s: 42 rss: 73Mb L: 26/27 MS: 1 CrossOver- 00:06:37.261 [2024-07-15 19:01:17.490527] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.261 [2024-07-15 19:01:17.490660] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.261 [2024-07-15 19:01:17.490863] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.261 [2024-07-15 19:01:17.490889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.261 [2024-07-15 19:01:17.490943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.261 [2024-07-15 19:01:17.490957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.261 #43 NEW cov: 12196 ft: 15441 corp: 27/584b lim: 30 exec/s: 43 rss: 73Mb L: 12/27 MS: 1 CopyPart- 00:06:37.261 [2024-07-15 19:01:17.540704] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.261 [2024-07-15 19:01:17.540821] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.261 [2024-07-15 19:01:17.540929] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b1de 00:06:37.261 [2024-07-15 19:01:17.541030] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xb1b1 00:06:37.261 [2024-07-15 19:01:17.541229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.261 [2024-07-15 19:01:17.541272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.261 [2024-07-15 19:01:17.541325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.261 [2024-07-15 19:01:17.541340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.261 [2024-07-15 19:01:17.541392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1fb836c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.261 [2024-07-15 19:01:17.541406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.261 [2024-07-15 19:01:17.541456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:52380013 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.261 [2024-07-15 19:01:17.541470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.261 #44 NEW cov: 12196 ft: 15461 corp: 28/611b lim: 30 exec/s: 44 rss: 73Mb L: 27/27 MS: 1 CopyPart- 00:06:37.262 [2024-07-15 19:01:17.590837] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x40ff 00:06:37.262 [2024-07-15 19:01:17.590970] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.262 [2024-07-15 19:01:17.591079] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.262 [2024-07-15 19:01:17.591297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:010000cc cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.262 [2024-07-15 19:01:17.591324] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.262 [2024-07-15 19:01:17.591389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.262 [2024-07-15 19:01:17.591407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.262 [2024-07-15 19:01:17.591460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.262 [2024-07-15 19:01:17.591473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.262 #45 NEW cov: 12196 ft: 15516 corp: 29/634b lim: 30 exec/s: 45 rss: 73Mb L: 23/27 MS: 1 InsertByte- 00:06:37.262 [2024-07-15 19:01:17.640964] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.262 [2024-07-15 19:01:17.641080] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b5 00:06:37.262 [2024-07-15 19:01:17.641189] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xb1b1 00:06:37.262 [2024-07-15 19:01:17.641398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.262 [2024-07-15 19:01:17.641423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.262 [2024-07-15 19:01:17.641478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.262 [2024-07-15 19:01:17.641492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.262 [2024-07-15 19:01:17.641543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b10001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.262 [2024-07-15 19:01:17.641557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.262 #46 NEW cov: 12196 ft: 15528 corp: 30/657b lim: 30 exec/s: 46 rss: 73Mb L: 23/27 MS: 1 EraseBytes- 00:06:37.521 [2024-07-15 19:01:17.691150] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.521 [2024-07-15 19:01:17.691274] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.521 [2024-07-15 19:01:17.691382] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.521 [2024-07-15 19:01:17.691492] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.521 [2024-07-15 19:01:17.691715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.691742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.521 [2024-07-15 19:01:17.691797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 
cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.691812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.521 [2024-07-15 19:01:17.691868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.691882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.521 [2024-07-15 19:01:17.691935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.691950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.521 #47 NEW cov: 12196 ft: 15568 corp: 31/686b lim: 30 exec/s: 47 rss: 73Mb L: 29/29 MS: 1 CrossOver- 00:06:37.521 [2024-07-15 19:01:17.731196] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x40ff 00:06:37.521 [2024-07-15 19:01:17.731339] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.521 [2024-07-15 19:01:17.731456] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.521 [2024-07-15 19:01:17.731668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:010000cc cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.731693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.521 [2024-07-15 19:01:17.731750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.731764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.521 [2024-07-15 19:01:17.731818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.731831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.521 #48 NEW cov: 12196 ft: 15580 corp: 32/709b lim: 30 exec/s: 48 rss: 74Mb L: 23/29 MS: 1 ShuffleBytes- 00:06:37.521 [2024-07-15 19:01:17.781368] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.521 [2024-07-15 19:01:17.781502] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.521 [2024-07-15 19:01:17.781611] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.521 [2024-07-15 19:01:17.781717] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.521 [2024-07-15 19:01:17.781940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.781966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.521 
[2024-07-15 19:01:17.782020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.782034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.521 [2024-07-15 19:01:17.782088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b5b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.782102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.521 [2024-07-15 19:01:17.782155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b18131 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.782168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.521 #49 NEW cov: 12196 ft: 15585 corp: 33/735b lim: 30 exec/s: 49 rss: 74Mb L: 26/29 MS: 1 ChangeBit- 00:06:37.521 [2024-07-15 19:01:17.821476] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.521 [2024-07-15 19:01:17.821613] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.521 [2024-07-15 19:01:17.821721] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.521 [2024-07-15 19:01:17.821827] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.521 [2024-07-15 19:01:17.822031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.822057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.521 [2024-07-15 19:01:17.822113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.822128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.521 [2024-07-15 19:01:17.822181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.521 [2024-07-15 19:01:17.822195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.521 [2024-07-15 19:01:17.822247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.522 [2024-07-15 19:01:17.822261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.522 #50 NEW cov: 12196 ft: 15604 corp: 34/764b lim: 30 exec/s: 50 rss: 74Mb L: 29/29 MS: 1 CopyPart- 00:06:37.522 [2024-07-15 19:01:17.871699] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.522 [2024-07-15 19:01:17.871927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:4 nsid:0 cdw10:01ff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.522 [2024-07-15 19:01:17.871953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.522 [2024-07-15 19:01:17.872005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.522 [2024-07-15 19:01:17.872019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.522 #51 NEW cov: 12213 ft: 15649 corp: 35/779b lim: 30 exec/s: 51 rss: 74Mb L: 15/29 MS: 1 InsertRepeatedBytes- 00:06:37.522 [2024-07-15 19:01:17.911768] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.522 [2024-07-15 19:01:17.911887] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.522 [2024-07-15 19:01:17.911996] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1fb 00:06:37.522 [2024-07-15 19:01:17.912098] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003813 00:06:37.522 [2024-07-15 19:01:17.912310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.522 [2024-07-15 19:01:17.912335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.522 [2024-07-15 19:01:17.912389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.522 [2024-07-15 19:01:17.912403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.522 [2024-07-15 19:01:17.912455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1b581b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.522 [2024-07-15 19:01:17.912468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.522 [2024-07-15 19:01:17.912520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6c5302de cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.522 [2024-07-15 19:01:17.912533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.522 #52 NEW cov: 12213 ft: 15657 corp: 36/806b lim: 30 exec/s: 52 rss: 74Mb L: 27/29 MS: 1 ChangeBit- 00:06:37.780 [2024-07-15 19:01:17.961869] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x40ff 00:06:37.780 [2024-07-15 19:01:17.961999] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.780 [2024-07-15 19:01:17.962112] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:37.780 [2024-07-15 19:01:17.962323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:010000cc cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.780 [2024-07-15 19:01:17.962349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:37.780 [2024-07-15 19:01:17.962404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.780 [2024-07-15 19:01:17.962419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.780 [2024-07-15 19:01:17.962471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.780 [2024-07-15 19:01:17.962484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.780 #53 NEW cov: 12213 ft: 15669 corp: 37/829b lim: 30 exec/s: 53 rss: 74Mb L: 23/29 MS: 1 ChangeBinInt- 00:06:37.780 [2024-07-15 19:01:18.002019] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (444104) > buf size (4096) 00:06:37.780 [2024-07-15 19:01:18.002151] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1a 00:06:37.780 [2024-07-15 19:01:18.002266] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000de52 00:06:37.780 [2024-07-15 19:01:18.002374] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.780 [2024-07-15 19:01:18.002596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b1b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.780 [2024-07-15 19:01:18.002622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.780 [2024-07-15 19:01:18.002678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.780 [2024-07-15 19:01:18.002693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.780 [2024-07-15 19:01:18.002745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b1fb836c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.780 [2024-07-15 19:01:18.002759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.780 [2024-07-15 19:01:18.002812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:38138100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.780 [2024-07-15 19:01:18.002825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.780 #54 NEW cov: 12213 ft: 15676 corp: 38/855b lim: 30 exec/s: 54 rss: 74Mb L: 26/29 MS: 1 ChangeBinInt- 00:06:37.780 [2024-07-15 19:01:18.042072] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:37.780 [2024-07-15 19:01:18.042185] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b1b1 00:06:37.780 [2024-07-15 19:01:18.042410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.780 [2024-07-15 19:01:18.042436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.780 [2024-07-15 19:01:18.042491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00b181b1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.780 [2024-07-15 19:01:18.042505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.780 #55 NEW cov: 12213 ft: 15736 corp: 39/871b lim: 30 exec/s: 27 rss: 74Mb L: 16/29 MS: 1 CrossOver- 00:06:37.780 #55 DONE cov: 12213 ft: 15736 corp: 39/871b lim: 30 exec/s: 27 rss: 74Mb 00:06:37.780 ###### Recommended dictionary. ###### 00:06:37.780 "\001\000\000\000" # Uses: 3 00:06:37.780 "\373lS\336R8\023\000" # Uses: 1 00:06:37.780 ###### End of recommended dictionary. ###### 00:06:37.780 Done 55 runs in 2 second(s) 00:06:37.780 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:37.780 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:37.780 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:37.780 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:37.780 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:37.780 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:37.780 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:37.780 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:37.780 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:37.780 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:37.780 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:38.039 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:38.039 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:38.039 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:38.039 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:38.039 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:38.039 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:38.039 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:38.039 19:01:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:38.039 [2024-07-15 19:01:18.251282] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
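Two recurring rejections in the run above fall straight out of the GET LOG PAGE (opcode 02) dword layout: "Invalid log page offset 0x10000b1b1" from ctrlr.c:2635 and "Get log page: len (444104) > buf size (4096)" from ctrlr.c:2647. Below is a minimal standalone C sketch of that decode, assuming the spec-standard field positions; it is not SPDK's nvmf_ctrlr_get_log_page itself, and the cdw12/cdw13 values are inferred from the logged offset, since the command dumps stop at cdw11.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t cdw10 = 0xb1b181b1;  /* LID[7:0], NUMDL[31:16], from the dumps */
        uint32_t cdw11 = 0x00000001;  /* NUMDU[15:0] */
        uint32_t cdw12 = 0x0000b1b1;  /* LPOL, inferred from the logged offset */
        uint32_t cdw13 = 0x00000001;  /* LPOU, inferred from the logged offset */

        unsigned lid  = cdw10 & 0xff;                             /* log page id */
        uint32_t numd = ((cdw11 & 0xffff) << 16) | (cdw10 >> 16); /* 0's based dword count */
        uint64_t len  = ((uint64_t)numd + 1) * 4;                 /* bytes requested */
        uint64_t off  = ((uint64_t)cdw13 << 32) | cdw12;          /* byte offset */

        /* Prints lid=0xb1 len=444104 off=0x10000b1b1, matching the two
         * ctrlr.c errors quoted above. */
        printf("lid=0x%02x len=%llu off=0x%llx\n",
               lid, (unsigned long long)len, (unsigned long long)off);
        return 0;
    }

With cdw10:b1b181b1 and cdw11:00000001 from the dumps, numd is 0x1b1b1, so the requested length is (0x1b1b1 + 1) * 4 = 444104 bytes, which is why the target rejects it against its 4096-byte buffer.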
00:06:38.039 [2024-07-15 19:01:18.251358] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669774 ] 00:06:38.039 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.039 [2024-07-15 19:01:18.462255] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.298 [2024-07-15 19:01:18.534378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.298 [2024-07-15 19:01:18.594162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.298 [2024-07-15 19:01:18.610485] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:38.298 INFO: Running with entropic power schedule (0xFF, 100). 00:06:38.298 INFO: Seed: 3431949079 00:06:38.298 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:38.298 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:38.298 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:38.298 INFO: A corpus is not provided, starting from an empty corpus 00:06:38.298 #2 INITED exec/s: 0 rss: 65Mb 00:06:38.298 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:38.298 This may also happen if the target rejected all inputs we tried so far 00:06:38.298 [2024-07-15 19:01:18.681105] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.298 [2024-07-15 19:01:18.681403] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.298 [2024-07-15 19:01:18.681671] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.298 [2024-07-15 19:01:18.682168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.298 [2024-07-15 19:01:18.682222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.298 [2024-07-15 19:01:18.682331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.298 [2024-07-15 19:01:18.682352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.298 [2024-07-15 19:01:18.682450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.298 [2024-07-15 19:01:18.682470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.864 NEW_FUNC[1/695]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:38.864 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:38.864 #17 NEW cov: 11884 ft: 11884 corp: 2/25b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 5 ChangeBit-InsertByte-CMP-CMP-InsertRepeatedBytes- DE: "\377\004"-"#\000\000\000"- 
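Each *NOTICE* completion line in these runs is a rendering of the 16-byte NVMe completion queue entry, with "(00/0b)" meaning (status code type / status code). A standalone decode sketch follows, with bit positions per the NVMe base spec; it is a sketch, not SPDK's spdk_nvme_print_completion.

    #include <stdint.h>
    #include <stdio.h>

    /* Decode of the NVMe completion queue entry fields behind a line such as
     * "INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0
     * m:0 dnr:0". Bit positions follow the NVMe base spec. */
    static void print_cpl(uint32_t cdw0, uint32_t dw2, uint32_t dw3)
    {
        unsigned sqhd = dw2 & 0xffff;       /* SQ head pointer echoed back */
        unsigned sqid = dw2 >> 16;          /* queue id: 0 = admin queue */
        unsigned cid  = dw3 & 0xffff;       /* command identifier */
        unsigned p    = (dw3 >> 16) & 0x1;  /* phase tag */
        unsigned sc   = (dw3 >> 17) & 0xff; /* status code */
        unsigned sct  = (dw3 >> 25) & 0x7;  /* status code type: 0 = generic */
        unsigned m    = (dw3 >> 30) & 0x1;  /* more status info available */
        unsigned dnr  = (dw3 >> 31) & 0x1;  /* do not retry */

        printf("(%02x/%02x) qid:%u cid:%u cdw0:%x sqhd:%04x p:%u m:%u dnr:%u\n",
               sct, sc, sqid, cid, cdw0, sqhd, p, m, dnr);
    }

    int main(void)
    {
        /* cid 4 on the admin queue, sct 0 / sc 0x0b, sqhd 0x000f */
        print_cpl(0x0, 0x000f, 4u | (0x0bu << 17));
        return 0;
    }

Running it prints "(00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0", the same fields the fuzzer logs; under the generic status code type, 0x02 is INVALID FIELD and 0x0b is INVALID NAMESPACE OR FORMAT.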
00:06:38.864 [2024-07-15 19:01:19.021652] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.864 [2024-07-15 19:01:19.021909] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.864 [2024-07-15 19:01:19.022145] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.864 [2024-07-15 19:01:19.022604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.864 [2024-07-15 19:01:19.022654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.864 [2024-07-15 19:01:19.022752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.864 [2024-07-15 19:01:19.022775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.864 [2024-07-15 19:01:19.022864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.864 [2024-07-15 19:01:19.022888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.864 #18 NEW cov: 12014 ft: 12376 corp: 3/50b lim: 35 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 InsertByte- 00:06:38.864 [2024-07-15 19:01:19.091889] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.864 [2024-07-15 19:01:19.092144] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.864 [2024-07-15 19:01:19.092395] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.864 [2024-07-15 19:01:19.092828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.864 [2024-07-15 19:01:19.092861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.864 [2024-07-15 19:01:19.092949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.864 [2024-07-15 19:01:19.092968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.864 [2024-07-15 19:01:19.093055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.864 [2024-07-15 19:01:19.093073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.864 #19 NEW cov: 12020 ft: 12567 corp: 4/77b lim: 35 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:06:38.864 [2024-07-15 19:01:19.142395] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.864 [2024-07-15 19:01:19.142647] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.864 [2024-07-15 
19:01:19.142874] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.864 [2024-07-15 19:01:19.143304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:044a00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.865 [2024-07-15 19:01:19.143335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.865 [2024-07-15 19:01:19.143429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.865 [2024-07-15 19:01:19.143447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.865 [2024-07-15 19:01:19.143534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.865 [2024-07-15 19:01:19.143553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.865 [2024-07-15 19:01:19.143637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.865 [2024-07-15 19:01:19.143655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.865 #24 NEW cov: 12115 ft: 13328 corp: 5/109b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 5 ChangeBit-ShuffleBytes-InsertByte-PersAutoDict-InsertRepeatedBytes- DE: "\377\004"- 00:06:38.865 [2024-07-15 19:01:19.192579] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.865 [2024-07-15 19:01:19.192842] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.865 [2024-07-15 19:01:19.193088] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.865 [2024-07-15 19:01:19.193531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:044a00ff cdw11:0000007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.865 [2024-07-15 19:01:19.193561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.865 [2024-07-15 19:01:19.193646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.865 [2024-07-15 19:01:19.193665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.865 [2024-07-15 19:01:19.193756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.865 [2024-07-15 19:01:19.193774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.865 [2024-07-15 19:01:19.193861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.865 [2024-07-15 19:01:19.193879] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.865 #25 NEW cov: 12115 ft: 13387 corp: 6/142b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 InsertByte- 00:06:38.865 [2024-07-15 19:01:19.252519] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.865 [2024-07-15 19:01:19.252777] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.865 [2024-07-15 19:01:19.253017] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.865 [2024-07-15 19:01:19.253279] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.865 [2024-07-15 19:01:19.253727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.865 [2024-07-15 19:01:19.253758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.865 [2024-07-15 19:01:19.253850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.865 [2024-07-15 19:01:19.253868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.865 [2024-07-15 19:01:19.253954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.865 [2024-07-15 19:01:19.253971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.865 [2024-07-15 19:01:19.254055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.865 [2024-07-15 19:01:19.254073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.865 #26 NEW cov: 12115 ft: 13501 corp: 7/175b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CopyPart- 00:06:39.124 [2024-07-15 19:01:19.302630] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.124 [2024-07-15 19:01:19.302904] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.124 [2024-07-15 19:01:19.303159] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.124 [2024-07-15 19:01:19.303632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:23000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.303662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.124 [2024-07-15 19:01:19.303767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.303786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.124 [2024-07-15 
19:01:19.303869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.303890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.124 #27 NEW cov: 12115 ft: 13569 corp: 8/200b lim: 35 exec/s: 0 rss: 72Mb L: 25/33 MS: 1 CopyPart- 00:06:39.124 [2024-07-15 19:01:19.363239] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.124 [2024-07-15 19:01:19.363520] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.124 [2024-07-15 19:01:19.363765] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.124 [2024-07-15 19:01:19.364210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:044a00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.364242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.124 [2024-07-15 19:01:19.364328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.364345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.124 [2024-07-15 19:01:19.364436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.364452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.124 [2024-07-15 19:01:19.364542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.364562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.124 #28 NEW cov: 12115 ft: 13685 corp: 9/233b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CrossOver- 00:06:39.124 [2024-07-15 19:01:19.413870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3a11000a cdw11:11001111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.413894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.124 [2024-07-15 19:01:19.413980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:11110011 cdw11:11001111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.413998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.124 [2024-07-15 19:01:19.414086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:11110011 cdw11:11001111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.414103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:06:39.124 #30 NEW cov: 12115 ft: 13898 corp: 10/259b lim: 35 exec/s: 0 rss: 72Mb L: 26/33 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:39.124 [2024-07-15 19:01:19.463402] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.124 [2024-07-15 19:01:19.463666] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.124 [2024-07-15 19:01:19.463914] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.124 [2024-07-15 19:01:19.464372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:044a00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.464399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.124 [2024-07-15 19:01:19.464491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.464510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.124 [2024-07-15 19:01:19.464595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:78000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.464617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.124 [2024-07-15 19:01:19.464706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.124 [2024-07-15 19:01:19.464724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.124 #31 NEW cov: 12115 ft: 13934 corp: 11/292b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 InsertByte- 00:06:39.125 [2024-07-15 19:01:19.513351] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.125 [2024-07-15 19:01:19.513627] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.125 [2024-07-15 19:01:19.513881] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.125 [2024-07-15 19:01:19.514334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:23000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.125 [2024-07-15 19:01:19.514362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.125 [2024-07-15 19:01:19.514453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.125 [2024-07-15 19:01:19.514471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.125 [2024-07-15 19:01:19.514558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:23000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.125 [2024-07-15 19:01:19.514578] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.125 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:39.125 #32 NEW cov: 12138 ft: 13969 corp: 12/317b lim: 35 exec/s: 0 rss: 72Mb L: 25/33 MS: 1 CrossOver- 00:06:39.384 [2024-07-15 19:01:19.574757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:044a00ff cdw11:0000007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.574784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.384 [2024-07-15 19:01:19.574876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:11110011 cdw11:11001111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.574892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.384 [2024-07-15 19:01:19.574972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:11110011 cdw11:11001111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.574988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.384 [2024-07-15 19:01:19.575094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:11110011 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.575111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.384 #33 NEW cov: 12138 ft: 14024 corp: 13/350b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CrossOver- 00:06:39.384 [2024-07-15 19:01:19.634038] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.384 [2024-07-15 19:01:19.634292] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.384 [2024-07-15 19:01:19.634555] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.384 [2024-07-15 19:01:19.635014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:044a00ff cdw11:0000007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.635043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.384 [2024-07-15 19:01:19.635132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.635150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.384 [2024-07-15 19:01:19.635239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.635258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.384 [2024-07-15 19:01:19.635342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.635363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.384 #34 NEW cov: 12138 ft: 14044 corp: 14/383b lim: 35 exec/s: 34 rss: 73Mb L: 33/33 MS: 1 CopyPart- 00:06:39.384 [2024-07-15 19:01:19.684039] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.384 [2024-07-15 19:01:19.684321] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.384 [2024-07-15 19:01:19.684572] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.384 [2024-07-15 19:01:19.685053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.685083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.384 [2024-07-15 19:01:19.685166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.685186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.384 [2024-07-15 19:01:19.685276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.685295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.384 #35 NEW cov: 12138 ft: 14125 corp: 15/410b lim: 35 exec/s: 35 rss: 73Mb L: 27/33 MS: 1 ChangeBit- 00:06:39.384 [2024-07-15 19:01:19.754242] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.384 [2024-07-15 19:01:19.754500] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.384 [2024-07-15 19:01:19.754745] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.384 [2024-07-15 19:01:19.754984] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.384 [2024-07-15 19:01:19.755434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.755464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.384 [2024-07-15 19:01:19.755548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.755567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.384 [2024-07-15 19:01:19.755656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.755675] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.384 [2024-07-15 19:01:19.755758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:2e2e0000 cdw11:2e002e2e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.384 [2024-07-15 19:01:19.755776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.384 #36 NEW cov: 12138 ft: 14196 corp: 16/443b lim: 35 exec/s: 36 rss: 73Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:06:39.643 [2024-07-15 19:01:19.824641] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.643 [2024-07-15 19:01:19.825119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:044a00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.643 [2024-07-15 19:01:19.825148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.643 [2024-07-15 19:01:19.825245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.643 [2024-07-15 19:01:19.825263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.643 #37 NEW cov: 12138 ft: 14426 corp: 17/460b lim: 35 exec/s: 37 rss: 73Mb L: 17/33 MS: 1 EraseBytes- 00:06:39.643 #40 NEW cov: 12138 ft: 14665 corp: 18/467b lim: 35 exec/s: 40 rss: 73Mb L: 7/33 MS: 3 CopyPart-ChangeBit-InsertRepeatedBytes- 00:06:39.643 [2024-07-15 19:01:19.945302] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.643 [2024-07-15 19:01:19.945581] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.643 [2024-07-15 19:01:19.945822] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.643 [2024-07-15 19:01:19.946257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:044a00ff cdw11:fd000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.643 [2024-07-15 19:01:19.946288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.643 [2024-07-15 19:01:19.946377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.643 [2024-07-15 19:01:19.946397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.643 [2024-07-15 19:01:19.946492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.643 [2024-07-15 19:01:19.946511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.643 [2024-07-15 19:01:19.946601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.643 [2024-07-15 19:01:19.946622] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.643 #41 NEW cov: 12138 ft: 14670 corp: 19/499b lim: 35 exec/s: 41 rss: 73Mb L: 32/33 MS: 1 ChangeByte- 00:06:39.643 [2024-07-15 19:01:19.995823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dcdc00dc cdw11:dc00dcdc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.643 [2024-07-15 19:01:19.995850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.643 [2024-07-15 19:01:19.995960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dcdc00dc cdw11:dc00dcdc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.643 [2024-07-15 19:01:19.995977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.643 #42 NEW cov: 12138 ft: 14747 corp: 20/517b lim: 35 exec/s: 42 rss: 73Mb L: 18/33 MS: 1 InsertRepeatedBytes- 00:06:39.643 [2024-07-15 19:01:20.046375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3a11000a cdw11:11001111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.643 [2024-07-15 19:01:20.046407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.643 [2024-07-15 19:01:20.046481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28110011 cdw11:11001111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.643 [2024-07-15 19:01:20.046497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.643 [2024-07-15 19:01:20.046587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:11110011 cdw11:11001111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.643 [2024-07-15 19:01:20.046603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.902 #43 NEW cov: 12138 ft: 14764 corp: 21/544b lim: 35 exec/s: 43 rss: 73Mb L: 27/33 MS: 1 InsertByte- 00:06:39.902 [2024-07-15 19:01:20.115754] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.902 [2024-07-15 19:01:20.116039] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.902 [2024-07-15 19:01:20.116311] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.902 [2024-07-15 19:01:20.116786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00001800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.116818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.902 [2024-07-15 19:01:20.116900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.116918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.902 [2024-07-15 19:01:20.117007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.117025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.902 #44 NEW cov: 12138 ft: 14778 corp: 22/568b lim: 35 exec/s: 44 rss: 73Mb L: 24/33 MS: 1 ChangeBinInt- 00:06:39.902 [2024-07-15 19:01:20.166296] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.902 [2024-07-15 19:01:20.166771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:044a00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.166798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.902 [2024-07-15 19:01:20.166887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.166905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.902 #45 NEW cov: 12138 ft: 14815 corp: 23/585b lim: 35 exec/s: 45 rss: 73Mb L: 17/33 MS: 1 PersAutoDict- DE: "#\000\000\000"- 00:06:39.902 [2024-07-15 19:01:20.226696] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.902 [2024-07-15 19:01:20.226968] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.902 [2024-07-15 19:01:20.227234] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.902 [2024-07-15 19:01:20.227681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:044a00ff cdw11:fd000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.227709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.902 [2024-07-15 19:01:20.227795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.227815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.902 [2024-07-15 19:01:20.227896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.227914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.902 [2024-07-15 19:01:20.228000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.228018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.902 #46 NEW cov: 12138 ft: 14852 corp: 24/617b lim: 35 exec/s: 46 rss: 73Mb L: 32/33 MS: 1 ChangeByte- 00:06:39.902 [2024-07-15 19:01:20.286853] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.902 [2024-07-15 19:01:20.287110] 
ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.902 [2024-07-15 19:01:20.287377] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.902 [2024-07-15 19:01:20.287829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:044a00ff cdw11:fd000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.287858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.902 [2024-07-15 19:01:20.287948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.287967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.902 [2024-07-15 19:01:20.288057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.288076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.902 [2024-07-15 19:01:20.288158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.902 [2024-07-15 19:01:20.288175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.902 #47 NEW cov: 12138 ft: 14883 corp: 25/649b lim: 35 exec/s: 47 rss: 73Mb L: 32/33 MS: 1 ChangeByte- 00:06:40.161 [2024-07-15 19:01:20.358157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:044a00ff cdw11:0000007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.161 [2024-07-15 19:01:20.358184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.161 [2024-07-15 19:01:20.358277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:11000011 cdw11:11001111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.161 [2024-07-15 19:01:20.358297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.161 [2024-07-15 19:01:20.358378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:11110011 cdw11:11001111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.161 [2024-07-15 19:01:20.358394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.161 [2024-07-15 19:01:20.358480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:11110011 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.161 [2024-07-15 19:01:20.358497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.161 #48 NEW cov: 12138 ft: 14893 corp: 26/682b lim: 35 exec/s: 48 rss: 73Mb L: 33/33 MS: 1 ShuffleBytes- 00:06:40.161 [2024-07-15 19:01:20.417273] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 
00:06:40.161 [2024-07-15 19:01:20.417541] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.161 [2024-07-15 19:01:20.417787] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.161 [2024-07-15 19:01:20.418047] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.161 [2024-07-15 19:01:20.418510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000400 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.161 [2024-07-15 19:01:20.418540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.161 [2024-07-15 19:01:20.418622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.161 [2024-07-15 19:01:20.418642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.161 [2024-07-15 19:01:20.418725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.161 [2024-07-15 19:01:20.418743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.161 [2024-07-15 19:01:20.418827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:2e2e0000 cdw11:2e002e2e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.161 [2024-07-15 19:01:20.418845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.161 #49 NEW cov: 12138 ft: 14903 corp: 27/715b lim: 35 exec/s: 49 rss: 73Mb L: 33/33 MS: 1 ChangeBinInt- 00:06:40.161 [2024-07-15 19:01:20.477381] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.161 [2024-07-15 19:01:20.477640] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.161 [2024-07-15 19:01:20.478310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.162 [2024-07-15 19:01:20.478341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.162 [2024-07-15 19:01:20.478425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:4a000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.162 [2024-07-15 19:01:20.478444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.162 [2024-07-15 19:01:20.478530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0000007f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.162 [2024-07-15 19:01:20.478551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.162 #50 NEW cov: 12138 ft: 14932 corp: 28/742b lim: 35 exec/s: 50 rss: 73Mb L: 27/33 MS: 1 CrossOver- 00:06:40.162 [2024-07-15 19:01:20.527516] 
ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.162 [2024-07-15 19:01:20.527792] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.162 [2024-07-15 19:01:20.528042] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.162 [2024-07-15 19:01:20.528481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.162 [2024-07-15 19:01:20.528511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.162 [2024-07-15 19:01:20.528589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.162 [2024-07-15 19:01:20.528607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.162 [2024-07-15 19:01:20.528692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.162 [2024-07-15 19:01:20.528711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.162 #51 NEW cov: 12138 ft: 14957 corp: 29/767b lim: 35 exec/s: 51 rss: 73Mb L: 25/33 MS: 1 ShuffleBytes- 00:06:40.162 [2024-07-15 19:01:20.577706] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.162 [2024-07-15 19:01:20.577966] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.162 [2024-07-15 19:01:20.578231] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.162 [2024-07-15 19:01:20.578703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.162 [2024-07-15 19:01:20.578731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.162 [2024-07-15 19:01:20.578820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.162 [2024-07-15 19:01:20.578837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.162 [2024-07-15 19:01:20.578919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.162 [2024-07-15 19:01:20.578936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.421 #52 NEW cov: 12138 ft: 14976 corp: 30/791b lim: 35 exec/s: 52 rss: 73Mb L: 24/33 MS: 1 ShuffleBytes- 00:06:40.421 [2024-07-15 19:01:20.628034] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.421 [2024-07-15 19:01:20.628543] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.421 [2024-07-15 19:01:20.628781] 
ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.421 [2024-07-15 19:01:20.629224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.421 [2024-07-15 19:01:20.629253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.421 [2024-07-15 19:01:20.629335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0000003f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.421 [2024-07-15 19:01:20.629356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.421 [2024-07-15 19:01:20.629439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.421 [2024-07-15 19:01:20.629456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.421 [2024-07-15 19:01:20.629547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:2e2e0000 cdw11:2e002e2e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.421 [2024-07-15 19:01:20.629564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.421 #53 NEW cov: 12138 ft: 15009 corp: 31/824b lim: 35 exec/s: 53 rss: 73Mb L: 33/33 MS: 1 ChangeByte- 00:06:40.421 [2024-07-15 19:01:20.678118] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.421 [2024-07-15 19:01:20.678373] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.421 [2024-07-15 19:01:20.678642] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.421 [2024-07-15 19:01:20.679103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.422 [2024-07-15 19:01:20.679133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.422 [2024-07-15 19:01:20.679223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.422 [2024-07-15 19:01:20.679243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.422 [2024-07-15 19:01:20.679333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000023 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.422 [2024-07-15 19:01:20.679353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.422 #54 NEW cov: 12138 ft: 15016 corp: 32/849b lim: 35 exec/s: 27 rss: 73Mb L: 25/33 MS: 1 EraseBytes- 00:06:40.422 #54 DONE cov: 12138 ft: 15016 corp: 32/849b lim: 35 exec/s: 27 rss: 73Mb 00:06:40.422 ###### Recommended dictionary. 
###### 00:06:40.422 "\377\004" # Uses: 1 00:06:40.422 "#\000\000\000" # Uses: 1 00:06:40.422 ###### End of recommended dictionary. ###### 00:06:40.422 Done 54 runs in 2 second(s) 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:40.422 19:01:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:40.681 [2024-07-15 19:01:20.870666] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
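The set -x trace above spells out how nvmf/run.sh stamps out one fuzzer instance per index: derive the TCP port from the index, create a per-index corpus directory, rewrite the listener port in the JSON config template, register two known shutdown-path leaks with LSAN, then launch llvm_nvme_fuzz. A minimal bash sketch of those steps follows; the redirection targets of the sed and echo lines are not visible in the trace (only the commands are), so those and the IDX variable are inferred, while the paths and flags are taken from the trace itself:

IDX=3
PORT="44$(printf %02d "$IDX")"   # matches the traced port=4403
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
CORPUS="$SPDK/../corpus/llvm_nvmf_$IDX"
CFG="/tmp/fuzz_json_$IDX.conf"
SUPP=/var/tmp/suppress_nvmf_fuzz

mkdir -p "$CORPUS"
# Point the NVMe/TCP listener in the config template at this instance's port.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$PORT\"/" \
    "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$CFG"
# Suppress two known shutdown-path leaks instead of failing on them.
{ echo leak:spdk_nvmf_qpair_disconnect; echo leak:nvmf_ctrlr_create; } > "$SUPP"
export LSAN_OPTIONS="report_objects=1:suppressions=$SUPP:print_suppressions=0"

"$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P "$SPDK/../output/llvm/" \
    -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$PORT" \
    -c "$CFG" -t 1 -D "$CORPUS" -Z "$IDX"

Judging by the NEW_FUNC lines in the output, -Z selects the admin-command entry point being fuzzed: index 3 exercises fuzz_admin_abort_command below, and index 4 exercises fuzz_admin_create_io_completion_queue_command in the following run.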
00:06:40.681 [2024-07-15 19:01:20.870736] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670145 ] 00:06:40.681 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.681 [2024-07-15 19:01:21.079795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.940 [2024-07-15 19:01:21.152079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.940 [2024-07-15 19:01:21.212101] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.940 [2024-07-15 19:01:21.228407] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:40.940 INFO: Running with entropic power schedule (0xFF, 100). 00:06:40.940 INFO: Seed: 1755995918 00:06:40.940 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:40.940 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:40.940 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:40.940 INFO: A corpus is not provided, starting from an empty corpus 00:06:40.940 #2 INITED exec/s: 0 rss: 65Mb 00:06:40.940 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:40.940 This may also happen if the target rejected all inputs we tried so far 00:06:41.199 NEW_FUNC[1/684]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:41.199 NEW_FUNC[2/684]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:41.458 #19 NEW cov: 11796 ft: 11796 corp: 2/10b lim: 20 exec/s: 0 rss: 71Mb L: 9/9 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:41.458 #20 NEW cov: 11926 ft: 12415 corp: 3/19b lim: 20 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBinInt- 00:06:41.458 #21 NEW cov: 11932 ft: 12623 corp: 4/28b lim: 20 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:41.458 #22 NEW cov: 12017 ft: 12792 corp: 5/38b lim: 20 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 InsertByte- 00:06:41.458 #28 NEW cov: 12017 ft: 13177 corp: 6/45b lim: 20 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 EraseBytes- 00:06:41.729 #29 NEW cov: 12017 ft: 13331 corp: 7/53b lim: 20 exec/s: 0 rss: 72Mb L: 8/10 MS: 1 CrossOver- 00:06:41.729 #30 NEW cov: 12017 ft: 13381 corp: 8/57b lim: 20 exec/s: 0 rss: 72Mb L: 4/10 MS: 1 EraseBytes- 00:06:41.729 #31 NEW cov: 12034 ft: 13813 corp: 9/76b lim: 20 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:06:41.729 #32 NEW cov: 12034 ft: 13862 corp: 10/86b lim: 20 exec/s: 0 rss: 72Mb L: 10/19 MS: 1 ChangeBinInt- 00:06:41.729 #33 NEW cov: 12034 ft: 13919 corp: 11/95b lim: 20 exec/s: 0 rss: 72Mb L: 9/19 MS: 1 ChangeBit- 00:06:42.001 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:42.001 #34 NEW cov: 12061 ft: 14077 corp: 12/108b lim: 20 exec/s: 0 rss: 72Mb L: 13/19 MS: 1 CMP- DE: "\010\000\000\000"- 00:06:42.001 #35 NEW cov: 12061 ft: 14107 corp: 13/112b lim: 20 exec/s: 0 rss: 72Mb L: 4/19 MS: 1 ChangeBinInt- 00:06:42.001 #36 NEW cov: 12061 ft: 14126 corp: 14/126b lim: 20 exec/s: 36 rss: 72Mb L: 14/19 MS: 1 InsertRepeatedBytes- 00:06:42.001 NEW_FUNC[1/4]: 0x11db1b0 in nvmf_qpair_abort_request 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3359 00:06:42.001 NEW_FUNC[2/4]: 0x11dbd30 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3301 00:06:42.001 #37 NEW cov: 12143 ft: 14247 corp: 15/137b lim: 20 exec/s: 37 rss: 72Mb L: 11/19 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:06:42.001 #38 NEW cov: 12143 ft: 14249 corp: 16/147b lim: 20 exec/s: 38 rss: 72Mb L: 10/19 MS: 1 InsertByte- 00:06:42.260 #39 NEW cov: 12143 ft: 14265 corp: 17/160b lim: 20 exec/s: 39 rss: 73Mb L: 13/19 MS: 1 ChangeBinInt- 00:06:42.260 #44 NEW cov: 12143 ft: 14283 corp: 18/178b lim: 20 exec/s: 44 rss: 73Mb L: 18/19 MS: 5 ChangeByte-ChangeBit-ChangeByte-CrossOver-InsertRepeatedBytes- 00:06:42.260 #45 NEW cov: 12143 ft: 14312 corp: 19/182b lim: 20 exec/s: 45 rss: 73Mb L: 4/19 MS: 1 ChangeBit- 00:06:42.260 #46 NEW cov: 12143 ft: 14319 corp: 20/191b lim: 20 exec/s: 46 rss: 73Mb L: 9/19 MS: 1 ChangeBit- 00:06:42.260 #47 NEW cov: 12143 ft: 14374 corp: 21/200b lim: 20 exec/s: 47 rss: 73Mb L: 9/19 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:06:42.519 #48 NEW cov: 12143 ft: 14388 corp: 22/205b lim: 20 exec/s: 48 rss: 73Mb L: 5/19 MS: 1 EraseBytes- 00:06:42.519 #51 NEW cov: 12143 ft: 14395 corp: 23/211b lim: 20 exec/s: 51 rss: 73Mb L: 6/19 MS: 3 CopyPart-CopyPart-PersAutoDict- DE: "\010\000\000\000"- 00:06:42.519 #52 NEW cov: 12143 ft: 14463 corp: 24/224b lim: 20 exec/s: 52 rss: 73Mb L: 13/19 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:06:42.519 [2024-07-15 19:01:22.864687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.519 [2024-07-15 19:01:22.864730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.519 NEW_FUNC[1/16]: 0x13ed9d0 in _nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3476 00:06:42.519 NEW_FUNC[2/16]: 0x16bbac0 in nvme_ctrlr_queue_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3263 00:06:42.519 #53 NEW cov: 12386 ft: 14800 corp: 25/242b lim: 20 exec/s: 53 rss: 73Mb L: 18/19 MS: 1 InsertRepeatedBytes- 00:06:42.777 #54 NEW cov: 12386 ft: 14820 corp: 26/249b lim: 20 exec/s: 54 rss: 73Mb L: 7/19 MS: 1 CopyPart- 00:06:42.777 #55 NEW cov: 12386 ft: 14854 corp: 27/262b lim: 20 exec/s: 55 rss: 73Mb L: 13/19 MS: 1 CMP- DE: "\377\377\377\036"- 00:06:42.777 #56 NEW cov: 12386 ft: 14879 corp: 28/276b lim: 20 exec/s: 56 rss: 73Mb L: 14/19 MS: 1 InsertByte- 00:06:42.777 #57 NEW cov: 12386 ft: 14896 corp: 29/285b lim: 20 exec/s: 57 rss: 73Mb L: 9/19 MS: 1 CrossOver- 00:06:42.777 #58 NEW cov: 12386 ft: 14925 corp: 30/298b lim: 20 exec/s: 58 rss: 73Mb L: 13/19 MS: 1 ChangeBinInt- 00:06:43.048 #59 NEW cov: 12386 ft: 14961 corp: 31/308b lim: 20 exec/s: 59 rss: 74Mb L: 10/19 MS: 1 InsertByte- 00:06:43.048 #60 NEW cov: 12386 ft: 14968 corp: 32/317b lim: 20 exec/s: 30 rss: 74Mb L: 9/19 MS: 1 CopyPart- 00:06:43.048 #60 DONE cov: 12386 ft: 14968 corp: 32/317b lim: 20 exec/s: 30 rss: 74Mb 00:06:43.048 ###### Recommended dictionary. ###### 00:06:43.048 "\010\000\000\000" # Uses: 4 00:06:43.048 "\377\377\377\036" # Uses: 0 00:06:43.048 ###### End of recommended dictionary. 
###### 00:06:43.048 Done 60 runs in 2 second(s) 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:43.048 19:01:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:43.305 [2024-07-15 19:01:23.476064] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
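Each completed run also prints a "Recommended dictionary" block; the one just above lists "\010\000\000\000" (used 4 times) and "\377\377\377\036". The harness here never feeds these back in, so purely as a hypothetical follow-up sketch: libFuzzer consumes AFL-style dictionary files whose entries use \xNN hex escapes rather than the octal the log prints, and it is an assumption, not something this log shows, that llvm_nvme_fuzz would forward an extra -dict= flag through to libFuzzer:

# Hypothetical: persist the recommended tokens for later runs.
# Octal from the log converted to hex: \010 -> \x08, \377 -> \xff, \036 -> \x1e.
cat > /tmp/nvmf_dict.txt <<'EOF'
kw1="\x08\x00\x00\x00"
kw2="\xff\xff\xff\x1e"
EOF
# A rerun would append -dict=/tmp/nvmf_dict.txt to the llvm_nvme_fuzz
# command line shown above (assumed flag pass-through to libFuzzer).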
00:06:43.305 [2024-07-15 19:01:23.476148] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670513 ] 00:06:43.305 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.305 [2024-07-15 19:01:23.683839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.563 [2024-07-15 19:01:23.754717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.563 [2024-07-15 19:01:23.814577] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.563 [2024-07-15 19:01:23.830860] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:43.563 INFO: Running with entropic power schedule (0xFF, 100). 00:06:43.563 INFO: Seed: 64026666 00:06:43.563 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:43.563 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:43.563 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:43.563 INFO: A corpus is not provided, starting from an empty corpus 00:06:43.563 #2 INITED exec/s: 0 rss: 65Mb 00:06:43.563 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:43.563 This may also happen if the target rejected all inputs we tried so far 00:06:43.563 [2024-07-15 19:01:23.885773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.563 [2024-07-15 19:01:23.885810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.563 [2024-07-15 19:01:23.885845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.563 [2024-07-15 19:01:23.885861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.563 [2024-07-15 19:01:23.885891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.563 [2024-07-15 19:01:23.885907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.821 NEW_FUNC[1/696]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:43.821 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:43.821 #7 NEW cov: 11906 ft: 11906 corp: 2/24b lim: 35 exec/s: 0 rss: 71Mb L: 23/23 MS: 5 ChangeByte-ChangeBinInt-CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:06:44.079 [2024-07-15 19:01:24.266679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:a8ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.266729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.079 [2024-07-15 19:01:24.266763] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.266779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.079 [2024-07-15 19:01:24.266808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.266824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.079 #8 NEW cov: 12036 ft: 12511 corp: 3/48b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 InsertByte- 00:06:44.079 [2024-07-15 19:01:24.346752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.346787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.079 [2024-07-15 19:01:24.346835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.346851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.079 [2024-07-15 19:01:24.346880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.346896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.079 [2024-07-15 19:01:24.346924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0000040a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.346939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.079 #9 NEW cov: 12042 ft: 13116 corp: 4/76b lim: 35 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:06:44.079 [2024-07-15 19:01:24.406952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.406984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.079 [2024-07-15 19:01:24.407016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.407032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.079 [2024-07-15 19:01:24.407061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.407077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.079 [2024-07-15 19:01:24.407105] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0000040a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.407124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.079 #10 NEW cov: 12127 ft: 13348 corp: 5/104b lim: 35 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 ChangeByte- 00:06:44.079 [2024-07-15 19:01:24.487115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.487145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.079 [2024-07-15 19:01:24.487192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.487208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.079 [2024-07-15 19:01:24.487244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff03ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.487260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.079 [2024-07-15 19:01:24.487288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0000040a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.079 [2024-07-15 19:01:24.487304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.338 #11 NEW cov: 12127 ft: 13451 corp: 6/132b lim: 35 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 ChangeBinInt- 00:06:44.338 [2024-07-15 19:01:24.547328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.547359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.338 [2024-07-15 19:01:24.547391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.547407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.338 [2024-07-15 19:01:24.547436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.547452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.338 [2024-07-15 19:01:24.547496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0000040a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.547512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.338 #12 NEW cov: 12127 ft: 13535 
corp: 7/160b lim: 35 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 ShuffleBytes- 00:06:44.338 [2024-07-15 19:01:24.627524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.627554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.338 [2024-07-15 19:01:24.627603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:bfffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.627619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.338 [2024-07-15 19:01:24.627658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.627678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.338 [2024-07-15 19:01:24.627707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0000040a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.627722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.338 #13 NEW cov: 12127 ft: 13601 corp: 8/188b lim: 35 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 ChangeBit- 00:06:44.338 [2024-07-15 19:01:24.707536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:eaea1aea cdw11:eaea0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.707567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.338 #15 NEW cov: 12127 ft: 14377 corp: 9/200b lim: 35 exec/s: 0 rss: 72Mb L: 12/28 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:44.338 [2024-07-15 19:01:24.767868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.767917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.338 [2024-07-15 19:01:24.767951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.767967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.338 [2024-07-15 19:01:24.767997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff03ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.768012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.338 [2024-07-15 19:01:24.768041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00ff040a cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.338 [2024-07-15 19:01:24.768057] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.596 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:44.596 #16 NEW cov: 12144 ft: 14472 corp: 10/230b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 CopyPart- 00:06:44.596 [2024-07-15 19:01:24.848036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.596 [2024-07-15 19:01:24.848065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.596 [2024-07-15 19:01:24.848113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.596 [2024-07-15 19:01:24.848129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.596 [2024-07-15 19:01:24.848157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff03ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.596 [2024-07-15 19:01:24.848172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.596 [2024-07-15 19:01:24.848201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:85850485 cdw11:850a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.596 [2024-07-15 19:01:24.848223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.597 #17 NEW cov: 12144 ft: 14538 corp: 11/264b lim: 35 exec/s: 17 rss: 72Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:44.597 [2024-07-15 19:01:24.928311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.597 [2024-07-15 19:01:24.928341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.597 [2024-07-15 19:01:24.928374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:bfffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.597 [2024-07-15 19:01:24.928390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.597 [2024-07-15 19:01:24.928419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:04ff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.597 [2024-07-15 19:01:24.928434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.597 [2024-07-15 19:01:24.928462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.597 [2024-07-15 19:01:24.928477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.597 #18 NEW cov: 12144 ft: 14578 corp: 12/292b lim: 35 exec/s: 18 rss: 72Mb L: 28/34 MS: 1 ShuffleBytes- 00:06:44.597 
[2024-07-15 19:01:25.008408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.597 [2024-07-15 19:01:25.008439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.597 [2024-07-15 19:01:25.008471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.597 [2024-07-15 19:01:25.008488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.597 [2024-07-15 19:01:25.008517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:acac0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.597 [2024-07-15 19:01:25.008532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.854 #19 NEW cov: 12144 ft: 14631 corp: 13/319b lim: 35 exec/s: 19 rss: 72Mb L: 27/34 MS: 1 InsertRepeatedBytes- 00:06:44.854 [2024-07-15 19:01:25.058495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.854 [2024-07-15 19:01:25.058524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.854 [2024-07-15 19:01:25.058572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.854 [2024-07-15 19:01:25.058588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.854 [2024-07-15 19:01:25.058617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:acac0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.854 [2024-07-15 19:01:25.058632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.854 #20 NEW cov: 12144 ft: 14745 corp: 14/346b lim: 35 exec/s: 20 rss: 73Mb L: 27/34 MS: 1 ShuffleBytes- 00:06:44.854 [2024-07-15 19:01:25.138768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:a8ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.854 [2024-07-15 19:01:25.138804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.854 [2024-07-15 19:01:25.138838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.854 [2024-07-15 19:01:25.138854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.854 [2024-07-15 19:01:25.138884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.855 [2024-07-15 19:01:25.138900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:06:44.855 #21 NEW cov: 12144 ft: 14752 corp: 15/368b lim: 35 exec/s: 21 rss: 73Mb L: 22/34 MS: 1 EraseBytes- 00:06:44.855 [2024-07-15 19:01:25.219076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.855 [2024-07-15 19:01:25.219109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.855 [2024-07-15 19:01:25.219142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:13131313 cdw11:13130003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.855 [2024-07-15 19:01:25.219159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.855 [2024-07-15 19:01:25.219189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.855 [2024-07-15 19:01:25.219204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.855 [2024-07-15 19:01:25.219239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.855 [2024-07-15 19:01:25.219256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.855 [2024-07-15 19:01:25.219284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:0000040a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.855 [2024-07-15 19:01:25.219299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.855 #22 NEW cov: 12144 ft: 14878 corp: 16/403b lim: 35 exec/s: 22 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:44.855 [2024-07-15 19:01:25.279182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.855 [2024-07-15 19:01:25.279215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.855 [2024-07-15 19:01:25.279258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.855 [2024-07-15 19:01:25.279274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.855 [2024-07-15 19:01:25.279304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.855 [2024-07-15 19:01:25.279320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.855 [2024-07-15 19:01:25.279348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:85850485 cdw11:850a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.855 [2024-07-15 19:01:25.279364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0
00:06:45.124 #23 NEW cov: 12144 ft: 14944 corp: 17/437b lim: 35 exec/s: 23 rss: 73Mb L: 34/35 MS: 1 CopyPart-
00:06:45.124 [2024-07-15 19:01:25.359319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:a8020003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.124 [2024-07-15 19:01:25.359350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:45.124 [2024-07-15 19:01:25.359397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.124 [2024-07-15 19:01:25.359413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:45.124 [2024-07-15 19:01:25.359442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.124 [2024-07-15 19:01:25.359457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:45.124 #24 NEW cov: 12144 ft: 14954 corp: 18/462b lim: 35 exec/s: 24 rss: 73Mb L: 25/35 MS: 1 InsertByte-
00:06:45.124 [2024-07-15 19:01:25.409403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:a8020003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.124 [2024-07-15 19:01:25.409434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:45.124 [2024-07-15 19:01:25.409481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.124 [2024-07-15 19:01:25.409497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:45.124 [2024-07-15 19:01:25.409525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.124 [2024-07-15 19:01:25.409541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:45.124 #25 NEW cov: 12144 ft: 14967 corp: 19/488b lim: 35 exec/s: 25 rss: 73Mb L: 26/35 MS: 1 InsertByte-
00:06:45.124 [2024-07-15 19:01:25.489539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.124 [2024-07-15 19:01:25.489569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:45.124 [2024-07-15 19:01:25.489616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.124 [2024-07-15 19:01:25.489633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:45.124 #26 NEW cov: 12144 ft: 15182 corp: 20/504b lim: 35 exec/s: 26 rss: 73Mb L: 16/35 MS: 1 EraseBytes-
00:06:45.124 [2024-07-15 19:01:25.549897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.124 [2024-07-15 19:01:25.549929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:45.124 [2024-07-15 19:01:25.549962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.124 [2024-07-15 19:01:25.549979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:45.124 [2024-07-15 19:01:25.550009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.124 [2024-07-15 19:01:25.550029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:45.124 [2024-07-15 19:01:25.550058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2000040a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.124 [2024-07-15 19:01:25.550074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:45.384 #27 NEW cov: 12144 ft: 15191 corp: 21/532b lim: 35 exec/s: 27 rss: 73Mb L: 28/35 MS: 1 ChangeBit-
00:06:45.384 [2024-07-15 19:01:25.600025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.600055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:45.384 [2024-07-15 19:01:25.600103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:13131313 cdw11:13130003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.600119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:45.384 [2024-07-15 19:01:25.600147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.600163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:45.384 [2024-07-15 19:01:25.600191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:06ff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.600206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:45.384 [2024-07-15 19:01:25.600241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:0000040a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.600257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:45.384 #28 NEW cov: 12144 ft: 15199 corp: 22/567b lim: 35 exec/s: 28 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt-
00:06:45.384 [2024-07-15 19:01:25.680066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.680097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:45.384 [2024-07-15 19:01:25.680129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.680145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:45.384 #29 NEW cov: 12144 ft: 15251 corp: 23/586b lim: 35 exec/s: 29 rss: 73Mb L: 19/35 MS: 1 EraseBytes-
00:06:45.384 [2024-07-15 19:01:25.740333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.740363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:45.384 [2024-07-15 19:01:25.740396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.740412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:45.384 [2024-07-15 19:01:25.740441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff03ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.740460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:45.384 [2024-07-15 19:01:25.740489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0000fc0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.740504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:45.384 #30 NEW cov: 12151 ft: 15298 corp: 24/614b lim: 35 exec/s: 30 rss: 73Mb L: 28/35 MS: 1 ChangeBinInt-
00:06:45.384 [2024-07-15 19:01:25.790422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:a8020003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.790453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:45.384 [2024-07-15 19:01:25.790500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.790516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:45.384 [2024-07-15 19:01:25.790545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f7ffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.384 [2024-07-15 19:01:25.790561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:45.641 #31 NEW cov: 12151 ft: 15312 corp: 25/640b lim: 35 exec/s: 31 rss: 73Mb L: 26/35 MS: 1 ChangeBit-
00:06:45.641 [2024-07-15 19:01:25.870628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:a8020003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.641 [2024-07-15 19:01:25.870658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:45.641 [2024-07-15 19:01:25.870705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.642 [2024-07-15 19:01:25.870720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:45.642 [2024-07-15 19:01:25.870749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f7ffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.642 [2024-07-15 19:01:25.870765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:45.642 #32 pulse cov: 12151 ft: 15326 corp: 25/640b lim: 35 exec/s: 16 rss: 73Mb
00:06:45.642 #32 NEW cov: 12151 ft: 15326 corp: 26/666b lim: 35 exec/s: 16 rss: 73Mb L: 26/35 MS: 1 ShuffleBytes-
00:06:45.642 #32 DONE cov: 12151 ft: 15326 corp: 26/666b lim: 35 exec/s: 16 rss: 73Mb
00:06:45.642 Done 32 runs in 2 second(s)
00:06:45.642 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz
00:06:45.642 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:45.642 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:45.899 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1
00:06:45.899 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5
00:06:45.899 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:45.899 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:45.899 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5
00:06:45.899 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf
00:06:45.899 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:45.900 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:45.900 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5
00:06:45.900 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405
00:06:45.900 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5
00:06:45.900 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405'
00:06:45.900 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:45.900 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:45.900 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:45.900 19:01:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5
00:06:45.900 [2024-07-15 19:01:26.115267] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
00:06:45.900 [2024-07-15 19:01:26.115338] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670882 ]
00:06:45.900 EAL: No free 2048 kB hugepages reported on node 1
00:06:46.158 [2024-07-15 19:01:26.323315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:46.158 [2024-07-15 19:01:26.395641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.158 [2024-07-15 19:01:26.455167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:46.158 [2024-07-15 19:01:26.471459] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 ***
00:06:46.158 INFO: Running with entropic power schedule (0xFF, 100).
00:06:46.158 INFO: Seed: 2705001473
00:06:46.158 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1),
00:06:46.158 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218),
00:06:46.158 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5
00:06:46.158 INFO: A corpus is not provided, starting from an empty corpus
00:06:46.158 #2 INITED exec/s: 0 rss: 64Mb
00:06:46.158 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:46.158 This may also happen if the target rejected all inputs we tried so far
00:06:46.158 [2024-07-15 19:01:26.516235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.158 [2024-07-15 19:01:26.516270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:46.158 [2024-07-15 19:01:26.516318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.159 [2024-07-15 19:01:26.516334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:46.159 [2024-07-15 19:01:26.516363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.159 [2024-07-15 19:01:26.516379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:46.740 NEW_FUNC[1/696]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142
00:06:46.740 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:06:46.740 #10 NEW cov: 11917 ft: 11914 corp: 2/31b lim: 45 exec/s: 0 rss: 72Mb L: 30/30 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes-
00:06:46.740 [2024-07-15 19:01:26.897313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:77770b77 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:26.897359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:46.740 [2024-07-15 19:01:26.897395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:26.897411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:46.740 [2024-07-15 19:01:26.897440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:26.897455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:46.740 [2024-07-15 19:01:26.897484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:26.897499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:46.740 #13 NEW cov: 12047 ft: 12692 corp: 3/69b lim: 45 exec/s: 0 rss: 72Mb L: 38/38 MS: 3 ChangeBit-CopyPart-InsertRepeatedBytes-
00:06:46.740 [2024-07-15 19:01:26.957338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:77770b77 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:26.957370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:46.740 [2024-07-15 19:01:26.957418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:26.957433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:46.740 [2024-07-15 19:01:26.957462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:26.957478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:46.740 [2024-07-15 19:01:26.957507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:26.957522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:46.740 #19 NEW cov: 12053 ft: 12879 corp: 4/108b lim: 45 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 InsertByte-
00:06:46.740 [2024-07-15 19:01:27.037389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:27.037419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:46.740 [2024-07-15 19:01:27.037467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:27.037483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:46.740 [2024-07-15 19:01:27.037512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff4a0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:27.037528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:46.740 #20 NEW cov: 12138 ft: 13189 corp: 5/139b lim: 45 exec/s: 0 rss: 72Mb L: 31/39 MS: 1 InsertByte-
00:06:46.740 [2024-07-15 19:01:27.117664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:27.117694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:46.740 [2024-07-15 19:01:27.117741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:27.117757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:46.740 [2024-07-15 19:01:27.117786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:09ffffff cdw11:ff4a0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:46.740 [2024-07-15 19:01:27.117802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.003 #21 NEW cov: 12138 ft: 13419 corp: 6/170b lim: 45 exec/s: 0 rss: 72Mb L: 31/39 MS: 1 ChangeBinInt-
00:06:47.003 [2024-07-15 19:01:27.197824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.197855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.003 [2024-07-15 19:01:27.197902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.197918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.003 [2024-07-15 19:01:27.197947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.197962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.003 #22 NEW cov: 12138 ft: 13489 corp: 7/200b lim: 45 exec/s: 0 rss: 72Mb L: 30/39 MS: 1 CrossOver-
00:06:47.003 [2024-07-15 19:01:27.247975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.248005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.003 [2024-07-15 19:01:27.248052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.248068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.003 [2024-07-15 19:01:27.248097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.248113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.003 #23 NEW cov: 12138 ft: 13570 corp: 8/230b lim: 45 exec/s: 0 rss: 72Mb L: 30/39 MS: 1 ChangeBinInt-
00:06:47.003 [2024-07-15 19:01:27.298120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.298149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.003 [2024-07-15 19:01:27.298182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.298203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.003 [2024-07-15 19:01:27.298239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.298255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.003 #24 NEW cov: 12138 ft: 13633 corp: 9/260b lim: 45 exec/s: 0 rss: 72Mb L: 30/39 MS: 1 ShuffleBytes-
00:06:47.003 [2024-07-15 19:01:27.378444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:77770b77 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.378476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.003 [2024-07-15 19:01:27.378510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.378527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.003 [2024-07-15 19:01:27.378556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.378571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.003 [2024-07-15 19:01:27.378601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.378616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:47.003 [2024-07-15 19:01:27.378645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.003 [2024-07-15 19:01:27.378661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:47.262 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:06:47.262 #25 NEW cov: 12161 ft: 13772 corp: 10/305b lim: 45 exec/s: 0 rss: 72Mb L: 45/45 MS: 1 InsertRepeatedBytes-
00:06:47.262 [2024-07-15 19:01:27.458467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:77770b77 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.262 [2024-07-15 19:01:27.458498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.262 [2024-07-15 19:01:27.458546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.262 [2024-07-15 19:01:27.458562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.262 [2024-07-15 19:01:27.458591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.262 [2024-07-15 19:01:27.458606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.262 #26 NEW cov: 12161 ft: 13841 corp: 11/336b lim: 45 exec/s: 0 rss: 72Mb L: 31/45 MS: 1 EraseBytes-
00:06:47.262 [2024-07-15 19:01:27.518788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:77770b77 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.262 [2024-07-15 19:01:27.518820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.262 [2024-07-15 19:01:27.518853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:77777777 cdw11:77770007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.262 [2024-07-15 19:01:27.518873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.262 [2024-07-15 19:01:27.518902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.262 [2024-07-15 19:01:27.518917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.263 [2024-07-15 19:01:27.518946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.263 [2024-07-15 19:01:27.518961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:47.263 [2024-07-15 19:01:27.518989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.263 [2024-07-15 19:01:27.519004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:47.263 #27 NEW cov: 12161 ft: 13866 corp: 12/381b lim: 45 exec/s: 27 rss: 73Mb L: 45/45 MS: 1 CopyPart-
00:06:47.263 [2024-07-15 19:01:27.598964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:77770b77 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.263 [2024-07-15 19:01:27.598994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.263 [2024-07-15 19:01:27.599027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.263 [2024-07-15 19:01:27.599043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.263 [2024-07-15 19:01:27.599073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.263 [2024-07-15 19:01:27.599088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.263 [2024-07-15 19:01:27.599116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.263 [2024-07-15 19:01:27.599131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:47.263 #28 NEW cov: 12161 ft: 13884 corp: 13/420b lim: 45 exec/s: 28 rss: 73Mb L: 39/45 MS: 1 CopyPart-
00:06:47.263 [2024-07-15 19:01:27.648965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.263 [2024-07-15 19:01:27.648996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.263 [2024-07-15 19:01:27.649029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.263 [2024-07-15 19:01:27.649045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.263 [2024-07-15 19:01:27.649074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fffeffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.263 [2024-07-15 19:01:27.649090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.263 #29 NEW cov: 12161 ft: 13942 corp: 14/450b lim: 45 exec/s: 29 rss: 73Mb L: 30/45 MS: 1 ChangeBit-
00:06:47.521 [2024-07-15 19:01:27.709160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.709194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.521 [2024-07-15 19:01:27.709249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.709265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.521 [2024-07-15 19:01:27.709294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.709309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.521 [2024-07-15 19:01:27.709337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ff4a09ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.709353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:47.521 #30 NEW cov: 12161 ft: 14009 corp: 15/488b lim: 45 exec/s: 30 rss: 73Mb L: 38/45 MS: 1 CopyPart-
00:06:47.521 [2024-07-15 19:01:27.789433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:77770b77 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.789464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.521 [2024-07-15 19:01:27.789497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.789514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.521 [2024-07-15 19:01:27.789543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.789558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.521 [2024-07-15 19:01:27.789586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.789601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:47.521 #31 NEW cov: 12161 ft: 14025 corp: 16/524b lim: 45 exec/s: 31 rss: 73Mb L: 36/45 MS: 1 EraseBytes-
00:06:47.521 [2024-07-15 19:01:27.839480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.839511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.521 [2024-07-15 19:01:27.839558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.839574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.521 [2024-07-15 19:01:27.839603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:000c0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.839618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.521 #32 NEW cov: 12161 ft: 14038 corp: 17/556b lim: 45 exec/s: 32 rss: 73Mb L: 32/45 MS: 1 CMP- DE: "\000\014"-
00:06:47.521 [2024-07-15 19:01:27.889635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:77770b77 cdw11:77870005 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.889670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.521 [2024-07-15 19:01:27.889703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:13005838 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.521 [2024-07-15 19:01:27.889719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.522 [2024-07-15 19:01:27.889747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:77777777 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.522 [2024-07-15 19:01:27.889762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.780 #33 NEW cov: 12161 ft: 14060 corp: 18/587b lim: 45 exec/s: 33 rss: 73Mb L: 31/45 MS: 1 CMP- DE: "\207\2702\364X8\023\000"-
00:06:47.780 [2024-07-15 19:01:27.969739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.780 [2024-07-15 19:01:27.969770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.780 [2024-07-15 19:01:27.969817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.780 [2024-07-15 19:01:27.969832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.780 #34 NEW cov: 12161 ft: 14305 corp: 19/608b lim: 45 exec/s: 34 rss: 73Mb L: 21/45 MS: 1 EraseBytes-
00:06:47.780 [2024-07-15 19:01:28.050028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.780 [2024-07-15 19:01:28.050060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.780 [2024-07-15 19:01:28.050107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.780 [2024-07-15 19:01:28.050123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.780 [2024-07-15 19:01:28.050152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.780 [2024-07-15 19:01:28.050167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.780 #35 NEW cov: 12161 ft: 14323 corp: 20/638b lim: 45 exec/s: 35 rss: 73Mb L: 30/45 MS: 1 ShuffleBytes-
00:06:47.780 [2024-07-15 19:01:28.100128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.780 [2024-07-15 19:01:28.100158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.780 [2024-07-15 19:01:28.100206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.780 [2024-07-15 19:01:28.100229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.780 [2024-07-15 19:01:28.100259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:000c0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.780 [2024-07-15 19:01:28.100275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:47.780 #36 NEW cov: 12161 ft: 14350 corp: 21/670b lim: 45 exec/s: 36 rss: 73Mb L: 32/45 MS: 1 CrossOver-
00:06:47.780 [2024-07-15 19:01:28.180342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:42ffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.780 [2024-07-15 19:01:28.180373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:47.780 [2024-07-15 19:01:28.180420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.780 [2024-07-15 19:01:28.180436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:47.780 [2024-07-15 19:01:28.180465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:47.780 [2024-07-15 19:01:28.180480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:48.039 #37 NEW cov: 12161 ft: 14432 corp: 22/702b lim: 45 exec/s: 37 rss: 73Mb L: 32/45 MS: 1 InsertByte-
00:06:48.039 [2024-07-15 19:01:28.230475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.039 [2024-07-15 19:01:28.230508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:48.039 [2024-07-15 19:01:28.230555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.039 [2024-07-15 19:01:28.230571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:48.039 [2024-07-15 19:01:28.230599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.039 [2024-07-15 19:01:28.230615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:48.039 #38 NEW cov: 12161 ft: 14440 corp: 23/732b lim: 45 exec/s: 38 rss: 73Mb L: 30/45 MS: 1 ChangeByte-
00:06:48.039 [2024-07-15 19:01:28.280670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.039 [2024-07-15 19:01:28.280703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:48.039 [2024-07-15 19:01:28.280736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.039 [2024-07-15 19:01:28.280752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:48.039 [2024-07-15 19:01:28.280782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.039 [2024-07-15 19:01:28.280797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:48.039 [2024-07-15 19:01:28.280825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ff4a09ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.039 [2024-07-15 19:01:28.280840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:48.040 #39 NEW cov: 12161 ft: 14465 corp: 24/770b lim: 45 exec/s: 39 rss: 73Mb L: 38/45 MS: 1 ChangeBinInt-
00:06:48.040 [2024-07-15 19:01:28.360681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.040 [2024-07-15 19:01:28.360712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:48.040 #40 NEW cov: 12161 ft: 15214 corp: 25/784b lim: 45 exec/s: 40 rss: 73Mb L: 14/45 MS: 1 EraseBytes-
00:06:48.040 [2024-07-15 19:01:28.440934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:77770b77 cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.040 [2024-07-15 19:01:28.440964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:48.040 [2024-07-15 19:01:28.441012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:7777ff3b cdw11:77770003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.040 [2024-07-15 19:01:28.441028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:48.299 #41 NEW cov: 12161 ft: 15230 corp: 26/803b lim: 45 exec/s: 20 rss: 73Mb L: 19/45 MS: 1 CrossOver-
00:06:48.299 #41 DONE cov: 12161 ft: 15230 corp: 26/803b lim: 45 exec/s: 20 rss: 73Mb
00:06:48.299 ###### Recommended dictionary. ######
00:06:48.299 "\000\014" # Uses: 0
00:06:48.299 "\207\2702\364X8\023\000" # Uses: 0
00:06:48.299 ###### End of recommended dictionary. ######
00:06:48.299 Done 41 runs in 2 second(s)
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406'
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:48.299 19:01:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6
00:06:48.299 [2024-07-15 19:01:28.686540] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
00:06:48.299 [2024-07-15 19:01:28.686610] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671250 ]
00:06:48.299 EAL: No free 2048 kB hugepages reported on node 1
00:06:48.560 [2024-07-15 19:01:28.897612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.560 [2024-07-15 19:01:28.968320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.866 [2024-07-15 19:01:29.028697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:48.866 [2024-07-15 19:01:29.044978] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 ***
00:06:48.866 INFO: Running with entropic power schedule (0xFF, 100).
00:06:48.866 INFO: Seed: 981048047
00:06:48.866 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1),
00:06:48.866 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218),
00:06:48.866 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:06:48.866 INFO: A corpus is not provided, starting from an empty corpus
00:06:48.866 #2 INITED exec/s: 0 rss: 64Mb
00:06:48.866 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:48.866 This may also happen if the target rejected all inputs we tried so far
00:06:48.866 [2024-07-15 19:01:29.103751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000
00:06:48.866 [2024-07-15 19:01:29.103780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:48.866 [2024-07-15 19:01:29.103832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000
00:06:48.866 [2024-07-15 19:01:29.103846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:49.182 NEW_FUNC[1/694]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161
00:06:49.182 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:06:49.182 #3 NEW cov: 11834 ft: 11835 corp: 2/6b lim: 10 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CMP- DE: "\376\377\377\377"-
00:06:49.182 [2024-07-15 19:01:29.445050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009ffe cdw11:00000000
00:06:49.182 [2024-07-15 19:01:29.445121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:49.182 [2024-07-15 19:01:29.445229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000
00:06:49.182 [2024-07-15 19:01:29.445263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:49.182 #4 NEW cov: 11964 ft: 12560 corp: 3/11b lim: 10 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeByte-
00:06:49.182 [2024-07-15 19:01:29.504811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000feff cdw11:00000000
00:06:49.182 [2024-07-15 19:01:29.504840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:49.182 [2024-07-15 19:01:29.504896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000
00:06:49.182 [2024-07-15 19:01:29.504910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:49.182 #5 NEW cov: 11970 ft: 12869 corp: 4/15b lim: 10 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 EraseBytes-
00:06:49.182 [2024-07-15 19:01:29.544779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000
00:06:49.182 [2024-07-15 19:01:29.544807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:49.182 #6 NEW cov: 12055 ft: 13342 corp: 5/17b lim: 10 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte-
00:06:49.182 [2024-07-15 19:01:29.585176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000
00:06:49.182 [2024-07-15 19:01:29.585205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:49.182 [2024-07-15 19:01:29.585271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000
00:06:49.182 [2024-07-15 19:01:29.585291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:49.182 [2024-07-15 19:01:29.585350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000
00:06:49.182 [2024-07-15 19:01:29.585365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:49.440 #7 NEW cov: 12055 ft: 13655 corp: 6/24b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 InsertRepeatedBytes-
00:06:49.440 [2024-07-15 19:01:29.635189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fe6e cdw11:00000000
00:06:49.440 [2024-07-15 19:01:29.635221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:49.440 [2024-07-15 19:01:29.635277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000
00:06:49.440 [2024-07-15 19:01:29.635292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:49.440 #8 NEW cov: 12055 ft: 13721 corp: 7/28b lim: 10 exec/s: 0 rss: 72Mb L: 4/7 MS: 1 ChangeByte-
00:06:49.440 [2024-07-15 19:01:29.685470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000
00:06:49.440 [2024-07-15 19:01:29.685496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:49.440 [2024-07-15 19:01:29.685552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000
00:06:49.440 [2024-07-15 19:01:29.685565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:49.440 [2024-07-15 19:01:29.685618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000065ff cdw11:00000000
00:06:49.440 [2024-07-15 19:01:29.685632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:49.440 #9 NEW cov: 12055 ft: 13822 corp: 8/35b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 ChangeByte-
00:06:49.440 [2024-07-15 19:01:29.735319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000
00:06:49.440 [2024-07-15 19:01:29.735346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:49.440 #10 NEW cov: 12055 ft: 13837 corp: 9/37b lim: 10 exec/s: 0 rss: 72Mb L: 2/7 MS: 1 ShuffleBytes-
00:06:49.440 [2024-07-15 19:01:29.775415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000
00:06:49.440 [2024-07-15 19:01:29.775440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:49.440 #11 NEW cov: 12055 ft: 13947 corp: 10/39b lim: 10 exec/s: 0 rss: 72Mb L: 2/7 MS: 1 ChangeBit-
00:06:49.440 [2024-07-15 19:01:29.815901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000
00:06:49.440 [2024-07-15 19:01:29.815927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:49.440 [2024-07-15 19:01:29.815985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000
00:06:49.440 [2024-07-15 19:01:29.816000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:49.440 [2024-07-15 19:01:29.816056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000feff cdw11:00000000
00:06:49.440 [2024-07-15 19:01:29.816069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:49.441 [2024-07-15 19:01:29.816125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000
00:06:49.441 [2024-07-15 19:01:29.816139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:49.441 #12 NEW cov: 12055 ft: 14202 corp: 11/47b lim: 10 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 CopyPart-
00:06:49.441 [2024-07-15 19:01:29.856190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000
00:06:49.441 [2024-07-15 19:01:29.856222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:49.441 [2024-07-15 19:01:29.856279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000
00:06:49.441 [2024-07-15 19:01:29.856292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:49.441 [2024-07-15 19:01:29.856348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff57 cdw11:00000000
00:06:49.441 [2024-07-15 19:01:29.856362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:49.441 [2024-07-15 19:01:29.856416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00005757 cdw11:00000000
00:06:49.441 [2024-07-15 19:01:29.856429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:49.441 [2024-07-15 19:01:29.856486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00005757 cdw11:00000000
00:06:49.441 [2024-07-15 19:01:29.856500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:49.700 #13 NEW cov: 12055 ft: 14272 corp: 12/57b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 InsertRepeatedBytes-
00:06:49.700 [2024-07-15 19:01:29.896279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000
00:06:49.700 [2024-07-15 19:01:29.896305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:49.700 [2024-07-15
19:01:29.896360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.700 [2024-07-15 19:01:29.896374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.700 [2024-07-15 19:01:29.896430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff57 cdw11:00000000 00:06:49.700 [2024-07-15 19:01:29.896442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.700 [2024-07-15 19:01:29.896499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a457 cdw11:00000000 00:06:49.700 [2024-07-15 19:01:29.896512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.700 [2024-07-15 19:01:29.896565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00005757 cdw11:00000000 00:06:49.700 [2024-07-15 19:01:29.896579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.700 #14 NEW cov: 12055 ft: 14366 corp: 13/67b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:49.700 [2024-07-15 19:01:29.945901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:06:49.700 [2024-07-15 19:01:29.945926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.700 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:49.700 #15 NEW cov: 12078 ft: 14409 corp: 14/70b lim: 10 exec/s: 0 rss: 73Mb L: 3/10 MS: 1 CopyPart- 00:06:49.700 [2024-07-15 19:01:29.996412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fe6e cdw11:00000000 00:06:49.700 [2024-07-15 19:01:29.996437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.700 [2024-07-15 19:01:29.996491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffe6 cdw11:00000000 00:06:49.700 [2024-07-15 19:01:29.996506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.700 [2024-07-15 19:01:29.996578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e6e6 cdw11:00000000 00:06:49.700 [2024-07-15 19:01:29.996591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.700 [2024-07-15 19:01:29.996647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e6e6 cdw11:00000000 00:06:49.700 [2024-07-15 19:01:29.996661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.700 #16 NEW cov: 12078 ft: 14437 corp: 15/79b lim: 10 exec/s: 0 rss: 73Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:49.700 [2024-07-15 19:01:30.056241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ 
(04) qid:0 cid:4 nsid:0 cdw10:000028c1 cdw11:00000000 00:06:49.700 [2024-07-15 19:01:30.056275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.700 #17 NEW cov: 12078 ft: 14486 corp: 16/82b lim: 10 exec/s: 17 rss: 73Mb L: 3/10 MS: 1 ChangeByte- 00:06:49.700 [2024-07-15 19:01:30.106878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:06:49.700 [2024-07-15 19:01:30.106911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.700 [2024-07-15 19:01:30.106969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff19 cdw11:00000000 00:06:49.700 [2024-07-15 19:01:30.106983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.700 [2024-07-15 19:01:30.107041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff57 cdw11:00000000 00:06:49.700 [2024-07-15 19:01:30.107055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.700 [2024-07-15 19:01:30.107111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a457 cdw11:00000000 00:06:49.700 [2024-07-15 19:01:30.107124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.700 [2024-07-15 19:01:30.107180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00005757 cdw11:00000000 00:06:49.700 [2024-07-15 19:01:30.107194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.993 #18 NEW cov: 12078 ft: 14499 corp: 17/92b lim: 10 exec/s: 18 rss: 73Mb L: 10/10 MS: 1 ChangeByte- 00:06:49.993 [2024-07-15 19:01:30.157021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.157050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.993 [2024-07-15 19:01:30.157107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.157121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.993 [2024-07-15 19:01:30.157176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fffe cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.157190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.993 [2024-07-15 19:01:30.157250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.157264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.993 [2024-07-15 19:01:30.157317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ 
(04) qid:0 cid:8 nsid:0 cdw10:0000ff57 cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.157331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.993 #19 NEW cov: 12078 ft: 14508 corp: 18/102b lim: 10 exec/s: 19 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:06:49.993 [2024-07-15 19:01:30.196719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fe6e cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.196746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.993 [2024-07-15 19:01:30.196802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.196815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.993 #20 NEW cov: 12078 ft: 14567 corp: 19/106b lim: 10 exec/s: 20 rss: 73Mb L: 4/10 MS: 1 CopyPart- 00:06:49.993 [2024-07-15 19:01:30.236953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000028fe cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.236980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.993 [2024-07-15 19:01:30.237039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.237052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.993 [2024-07-15 19:01:30.237106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.237120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.993 #21 NEW cov: 12078 ft: 14691 corp: 20/112b lim: 10 exec/s: 21 rss: 73Mb L: 6/10 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:06:49.993 [2024-07-15 19:01:30.276854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002831 cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.276881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.993 #22 NEW cov: 12078 ft: 14741 corp: 21/114b lim: 10 exec/s: 22 rss: 73Mb L: 2/10 MS: 1 ChangeByte- 00:06:49.993 [2024-07-15 19:01:30.317080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.317107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.993 [2024-07-15 19:01:30.317178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.317193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.993 #23 NEW cov: 12078 ft: 14758 corp: 22/119b lim: 10 exec/s: 23 rss: 73Mb L: 5/10 MS: 1 EraseBytes- 00:06:49.993 [2024-07-15 19:01:30.357592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2e cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.357622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.993 [2024-07-15 19:01:30.357680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.357694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.993 [2024-07-15 19:01:30.357749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff57 cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.357762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.993 [2024-07-15 19:01:30.357815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00005757 cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.357829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.993 [2024-07-15 19:01:30.357883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00005757 cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.357896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.993 #24 NEW cov: 12078 ft: 14764 corp: 23/129b lim: 10 exec/s: 24 rss: 73Mb L: 10/10 MS: 1 ChangeByte- 00:06:49.993 [2024-07-15 19:01:30.397134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a28 cdw11:00000000 00:06:49.993 [2024-07-15 19:01:30.397160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.993 #25 NEW cov: 12078 ft: 14783 corp: 24/132b lim: 10 exec/s: 25 rss: 73Mb L: 3/10 MS: 1 CopyPart- 00:06:50.251 [2024-07-15 19:01:30.437411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006701 cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.437437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.251 [2024-07-15 19:01:30.437495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.437509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.251 #26 NEW cov: 12078 ft: 14803 corp: 25/137b lim: 10 exec/s: 26 rss: 73Mb L: 5/10 MS: 1 ChangeBinInt- 00:06:50.251 [2024-07-15 19:01:30.487440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002828 cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.487466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.251 #27 NEW cov: 12078 ft: 14809 corp: 26/139b lim: 10 exec/s: 27 rss: 73Mb L: 2/10 MS: 1 CopyPart- 00:06:50.251 [2024-07-15 19:01:30.527666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000feff cdw11:00000000 00:06:50.251 [2024-07-15 
19:01:30.527691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.251 [2024-07-15 19:01:30.527747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.527761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.251 #32 NEW cov: 12078 ft: 14824 corp: 27/144b lim: 10 exec/s: 32 rss: 73Mb L: 5/10 MS: 5 CrossOver-ShuffleBytes-ChangeByte-ShuffleBytes-PersAutoDict- DE: "\376\377\377\377"- 00:06:50.251 [2024-07-15 19:01:30.577941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000feff cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.577966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.251 [2024-07-15 19:01:30.578040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.578054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.251 [2024-07-15 19:01:30.578109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002831 cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.578123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.251 #33 NEW cov: 12078 ft: 14840 corp: 28/150b lim: 10 exec/s: 33 rss: 73Mb L: 6/10 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:06:50.251 [2024-07-15 19:01:30.628054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.628088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.251 [2024-07-15 19:01:30.628143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.628158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.251 [2024-07-15 19:01:30.628214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003aff cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.628232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.251 #34 NEW cov: 12078 ft: 14889 corp: 29/157b lim: 10 exec/s: 34 rss: 73Mb L: 7/10 MS: 1 ChangeByte- 00:06:50.251 [2024-07-15 19:01:30.668409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2e cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.668435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.251 [2024-07-15 19:01:30.668507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.668520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:06:50.251 [2024-07-15 19:01:30.668577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff57 cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.668591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.251 [2024-07-15 19:01:30.668646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00005757 cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.668659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.251 [2024-07-15 19:01:30.668715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00005657 cdw11:00000000 00:06:50.251 [2024-07-15 19:01:30.668729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.510 #35 NEW cov: 12078 ft: 14896 corp: 30/167b lim: 10 exec/s: 35 rss: 73Mb L: 10/10 MS: 1 ChangeByte- 00:06:50.510 [2024-07-15 19:01:30.708584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000028c1 cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.708610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.510 [2024-07-15 19:01:30.708681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.708696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.510 [2024-07-15 19:01:30.708752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.708771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.510 [2024-07-15 19:01:30.708824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.708837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.510 [2024-07-15 19:01:30.708891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.708904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.510 #36 NEW cov: 12078 ft: 14913 corp: 31/177b lim: 10 exec/s: 36 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:50.510 [2024-07-15 19:01:30.758139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e631 cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.758166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.510 #37 NEW cov: 12078 ft: 14931 corp: 32/179b lim: 10 exec/s: 37 rss: 73Mb L: 2/10 MS: 1 CrossOver- 00:06:50.510 [2024-07-15 19:01:30.798253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:50.510 [2024-07-15 
19:01:30.798282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.510 #39 NEW cov: 12078 ft: 14983 corp: 33/181b lim: 10 exec/s: 39 rss: 74Mb L: 2/10 MS: 2 ShuffleBytes-CopyPart- 00:06:50.510 [2024-07-15 19:01:30.838657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.838685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.510 [2024-07-15 19:01:30.838740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.838753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.510 [2024-07-15 19:01:30.838807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff57 cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.838820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.510 #40 NEW cov: 12078 ft: 14988 corp: 34/187b lim: 10 exec/s: 40 rss: 74Mb L: 6/10 MS: 1 EraseBytes- 00:06:50.510 [2024-07-15 19:01:30.888634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.888663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.510 [2024-07-15 19:01:30.888720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.888736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.510 #41 NEW cov: 12078 ft: 14993 corp: 35/191b lim: 10 exec/s: 41 rss: 74Mb L: 4/10 MS: 1 EraseBytes- 00:06:50.510 [2024-07-15 19:01:30.939060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.939088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.510 [2024-07-15 19:01:30.939145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.939158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.510 [2024-07-15 19:01:30.939216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a457 cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.939236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.510 [2024-07-15 19:01:30.939290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00005757 cdw11:00000000 00:06:50.510 [2024-07-15 19:01:30.939304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.769 #42 NEW cov: 12078 ft: 14998 corp: 36/199b lim: 10 exec/s: 42 
rss: 74Mb L: 8/10 MS: 1 EraseBytes- 00:06:50.769 [2024-07-15 19:01:30.978765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:50.769 [2024-07-15 19:01:30.978793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.769 #43 NEW cov: 12078 ft: 15030 corp: 37/201b lim: 10 exec/s: 43 rss: 74Mb L: 2/10 MS: 1 CrossOver- 00:06:50.769 [2024-07-15 19:01:31.019237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.769 [2024-07-15 19:01:31.019264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.769 [2024-07-15 19:01:31.019337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:50.769 [2024-07-15 19:01:31.019351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.769 [2024-07-15 19:01:31.019408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000feff cdw11:00000000 00:06:50.770 [2024-07-15 19:01:31.019422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.770 [2024-07-15 19:01:31.019476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.770 [2024-07-15 19:01:31.019490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.770 #44 NEW cov: 12078 ft: 15061 corp: 38/209b lim: 10 exec/s: 44 rss: 74Mb L: 8/10 MS: 1 ShuffleBytes- 00:06:50.770 [2024-07-15 19:01:31.069498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:06:50.770 [2024-07-15 19:01:31.069525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.770 [2024-07-15 19:01:31.069584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.770 [2024-07-15 19:01:31.069597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.770 [2024-07-15 19:01:31.069670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fe6e cdw11:00000000 00:06:50.770 [2024-07-15 19:01:31.069683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.770 [2024-07-15 19:01:31.069738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a457 cdw11:00000000 00:06:50.770 [2024-07-15 19:01:31.069752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.770 [2024-07-15 19:01:31.069808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00005757 cdw11:00000000 00:06:50.770 [2024-07-15 19:01:31.069822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.770 #45 NEW cov: 12078 ft: 15130 corp: 39/219b lim: 10 exec/s: 22 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:06:50.770 #45 DONE cov: 12078 ft: 15130 corp: 39/219b lim: 10 exec/s: 22 rss: 74Mb 00:06:50.770 ###### Recommended dictionary. ###### 00:06:50.770 "\376\377\377\377" # Uses: 3 00:06:50.770 ###### End of recommended dictionary. ###### 00:06:50.770 Done 45 runs in 2 second(s) 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:51.029 19:01:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:51.029 [2024-07-15 19:01:31.287424] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:06:51.029 [2024-07-15 19:01:31.287495] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671625 ] 00:06:51.029 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.289 [2024-07-15 19:01:31.498461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.289 [2024-07-15 19:01:31.571687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.289 [2024-07-15 19:01:31.630932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.289 [2024-07-15 19:01:31.647233] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:51.289 INFO: Running with entropic power schedule (0xFF, 100). 00:06:51.289 INFO: Seed: 3585036695 00:06:51.289 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:51.289 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:51.289 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:51.289 INFO: A corpus is not provided, starting from an empty corpus 00:06:51.289 #2 INITED exec/s: 0 rss: 65Mb 00:06:51.289 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:51.289 This may also happen if the target rejected all inputs we tried so far 00:06:51.289 [2024-07-15 19:01:31.691865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a41 cdw11:00000000 00:06:51.289 [2024-07-15 19:01:31.691900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.806 NEW_FUNC[1/693]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:51.806 NEW_FUNC[2/693]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:51.806 #7 NEW cov: 11830 ft: 11831 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 5 ShuffleBytes-CrossOver-CrossOver-ShuffleBytes-InsertByte- 00:06:51.806 [2024-07-15 19:01:32.062769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c641 cdw11:00000000 00:06:51.806 [2024-07-15 19:01:32.062813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.806 NEW_FUNC[1/1]: 0x17ad780 in nvme_qpair_check_enabled /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:637 00:06:51.806 #9 NEW cov: 11964 ft: 12365 corp: 3/5b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 2 EraseBytes-InsertByte- 00:06:51.806 [2024-07-15 19:01:32.142834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002c41 cdw11:00000000 00:06:51.806 [2024-07-15 19:01:32.142866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.806 #10 NEW cov: 11970 ft: 12686 corp: 4/7b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeByte- 00:06:51.806 [2024-07-15 19:01:32.223071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c641 cdw11:00000000 
00:06:51.806 [2024-07-15 19:01:32.223107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.065 #11 NEW cov: 12055 ft: 12960 corp: 5/9b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ShuffleBytes- 00:06:52.065 [2024-07-15 19:01:32.273169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:52.065 [2024-07-15 19:01:32.273201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.065 #13 NEW cov: 12055 ft: 13049 corp: 6/11b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 2 CopyPart-CopyPart- 00:06:52.065 [2024-07-15 19:01:32.323381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:52.065 [2024-07-15 19:01:32.323413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.065 [2024-07-15 19:01:32.323444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:52.065 [2024-07-15 19:01:32.323460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.065 #14 NEW cov: 12055 ft: 13290 corp: 7/15b lim: 10 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:52.065 [2024-07-15 19:01:32.383427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000412c cdw11:00000000 00:06:52.065 [2024-07-15 19:01:32.383460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.065 #15 NEW cov: 12055 ft: 13386 corp: 8/17b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 ShuffleBytes- 00:06:52.065 [2024-07-15 19:01:32.463693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c641 cdw11:00000000 00:06:52.065 [2024-07-15 19:01:32.463725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.324 #16 NEW cov: 12055 ft: 13458 corp: 9/19b lim: 10 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ShuffleBytes- 00:06:52.324 [2024-07-15 19:01:32.543992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c641 cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.544028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.324 [2024-07-15 19:01:32.544060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.544076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.324 [2024-07-15 19:01:32.544104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.544120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.324 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:52.324 #17 NEW cov: 12072 ft: 13747 
corp: 10/25b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:06:52.324 [2024-07-15 19:01:32.604129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002c39 cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.604160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.324 [2024-07-15 19:01:32.604205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003939 cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.604228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.324 [2024-07-15 19:01:32.604256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003939 cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.604272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.324 [2024-07-15 19:01:32.604299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003941 cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.604314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.324 #18 NEW cov: 12072 ft: 13998 corp: 11/33b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:06:52.324 [2024-07-15 19:01:32.664342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c641 cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.664371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.324 [2024-07-15 19:01:32.664416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.664432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.324 [2024-07-15 19:01:32.664459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.664475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.324 [2024-07-15 19:01:32.664501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.664516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.324 [2024-07-15 19:01:32.664543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.664557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.324 #19 NEW cov: 12072 ft: 14046 corp: 12/43b lim: 10 exec/s: 19 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:06:52.324 [2024-07-15 19:01:32.744358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e70a cdw11:00000000 00:06:52.324 [2024-07-15 19:01:32.744388] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.582 #20 NEW cov: 12072 ft: 14068 corp: 13/45b lim: 10 exec/s: 20 rss: 73Mb L: 2/10 MS: 1 InsertByte- 00:06:52.582 [2024-07-15 19:01:32.794491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000450a cdw11:00000000 00:06:52.582 [2024-07-15 19:01:32.794521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.582 #24 NEW cov: 12072 ft: 14077 corp: 14/48b lim: 10 exec/s: 24 rss: 73Mb L: 3/10 MS: 4 EraseBytes-ChangeBit-CopyPart-CrossOver- 00:06:52.582 [2024-07-15 19:01:32.874700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000041c6 cdw11:00000000 00:06:52.582 [2024-07-15 19:01:32.874730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.582 #25 NEW cov: 12072 ft: 14103 corp: 15/51b lim: 10 exec/s: 25 rss: 73Mb L: 3/10 MS: 1 CopyPart- 00:06:52.582 [2024-07-15 19:01:32.924869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 00:06:52.582 [2024-07-15 19:01:32.924900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.582 [2024-07-15 19:01:32.924947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:52.582 [2024-07-15 19:01:32.924964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.582 #26 NEW cov: 12072 ft: 14114 corp: 16/55b lim: 10 exec/s: 26 rss: 73Mb L: 4/10 MS: 1 ChangeBit- 00:06:52.582 [2024-07-15 19:01:33.005055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000041c6 cdw11:00000000 00:06:52.582 [2024-07-15 19:01:33.005086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.840 #27 NEW cov: 12072 ft: 14166 corp: 17/58b lim: 10 exec/s: 27 rss: 73Mb L: 3/10 MS: 1 ChangeBinInt- 00:06:52.841 [2024-07-15 19:01:33.085245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000450a cdw11:00000000 00:06:52.841 [2024-07-15 19:01:33.085275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.841 #28 NEW cov: 12072 ft: 14176 corp: 18/61b lim: 10 exec/s: 28 rss: 73Mb L: 3/10 MS: 1 ShuffleBytes- 00:06:52.841 [2024-07-15 19:01:33.165450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000041c6 cdw11:00000000 00:06:52.841 [2024-07-15 19:01:33.165479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.841 #29 NEW cov: 12072 ft: 14208 corp: 19/64b lim: 10 exec/s: 29 rss: 73Mb L: 3/10 MS: 1 ShuffleBytes- 00:06:52.841 [2024-07-15 19:01:33.215656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004139 cdw11:00000000 00:06:52.841 [2024-07-15 19:01:33.215685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:52.841 [2024-07-15 19:01:33.215730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003939 cdw11:00000000 00:06:52.841 [2024-07-15 19:01:33.215745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.841 [2024-07-15 19:01:33.215772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003941 cdw11:00000000 00:06:52.841 [2024-07-15 19:01:33.215787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.841 #31 NEW cov: 12072 ft: 14237 corp: 20/70b lim: 10 exec/s: 31 rss: 73Mb L: 6/10 MS: 2 EraseBytes-CrossOver- 00:06:52.841 [2024-07-15 19:01:33.265783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000412c cdw11:00000000 00:06:52.841 [2024-07-15 19:01:33.265812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.841 [2024-07-15 19:01:33.265858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003939 cdw11:00000000 00:06:52.841 [2024-07-15 19:01:33.265873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.841 [2024-07-15 19:01:33.265900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c63c cdw11:00000000 00:06:52.841 [2024-07-15 19:01:33.265915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.099 #32 NEW cov: 12072 ft: 14259 corp: 21/76b lim: 10 exec/s: 32 rss: 73Mb L: 6/10 MS: 1 CrossOver- 00:06:53.099 [2024-07-15 19:01:33.345945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 00:06:53.099 [2024-07-15 19:01:33.345974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.099 [2024-07-15 19:01:33.346019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 00:06:53.099 [2024-07-15 19:01:33.346035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.099 #33 NEW cov: 12072 ft: 14265 corp: 22/80b lim: 10 exec/s: 33 rss: 73Mb L: 4/10 MS: 1 ChangeBinInt- 00:06:53.099 [2024-07-15 19:01:33.426156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000041c6 cdw11:00000000 00:06:53.099 [2024-07-15 19:01:33.426185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.099 [2024-07-15 19:01:33.426240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000041c6 cdw11:00000000 00:06:53.099 [2024-07-15 19:01:33.426256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.099 #34 NEW cov: 12072 ft: 14336 corp: 23/85b lim: 10 exec/s: 34 rss: 73Mb L: 5/10 MS: 1 CopyPart- 00:06:53.099 [2024-07-15 19:01:33.506329] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e725 cdw11:00000000 00:06:53.099 [2024-07-15 19:01:33.506359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.358 #35 NEW cov: 12072 ft: 14353 corp: 24/88b lim: 10 exec/s: 35 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:06:53.358 [2024-07-15 19:01:33.586563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000041c6 cdw11:00000000 00:06:53.358 [2024-07-15 19:01:33.586595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.358 #36 NEW cov: 12079 ft: 14385 corp: 25/90b lim: 10 exec/s: 36 rss: 73Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:53.358 [2024-07-15 19:01:33.666728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002c41 cdw11:00000000 00:06:53.358 [2024-07-15 19:01:33.666759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.358 #37 NEW cov: 12079 ft: 14411 corp: 26/92b lim: 10 exec/s: 18 rss: 73Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:53.358 #37 DONE cov: 12079 ft: 14411 corp: 26/92b lim: 10 exec/s: 18 rss: 73Mb 00:06:53.358 Done 37 runs in 2 second(s) 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:53.618 19:01:33 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:53.618 [2024-07-15 19:01:33.880964] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:53.618 [2024-07-15 19:01:33.881035] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671962 ] 00:06:53.618 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.899 [2024-07-15 19:01:34.095793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.899 [2024-07-15 19:01:34.166140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.899 [2024-07-15 19:01:34.225433] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.899 [2024-07-15 19:01:34.241732] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:53.899 INFO: Running with entropic power schedule (0xFF, 100). 00:06:53.899 INFO: Seed: 1885074709 00:06:53.899 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:53.899 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:53.899 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:53.899 INFO: A corpus is not provided, starting from an empty corpus 00:06:53.899 [2024-07-15 19:01:34.286388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.899 [2024-07-15 19:01:34.286420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.899 #2 INITED cov: 11862 ft: 11863 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:54.159 [2024-07-15 19:01:34.336415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.159 [2024-07-15 19:01:34.336444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.159 #3 NEW cov: 11992 ft: 12514 corp: 2/2b lim: 5 exec/s: 0 rss: 71Mb L: 1/1 MS: 1 ChangeBit- 00:06:54.159 [2024-07-15 19:01:34.416756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.159 [2024-07-15 19:01:34.416786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.159 [2024-07-15 19:01:34.416834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.159 [2024-07-15 19:01:34.416850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.159 [2024-07-15 19:01:34.416880] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.159 [2024-07-15 19:01:34.416895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.159 [2024-07-15 19:01:34.416923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.159 [2024-07-15 19:01:34.416939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.159 #4 NEW cov: 11998 ft: 13565 corp: 3/6b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:54.159 [2024-07-15 19:01:34.476715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.159 [2024-07-15 19:01:34.476744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.159 #5 NEW cov: 12083 ft: 13851 corp: 4/7b lim: 5 exec/s: 0 rss: 71Mb L: 1/4 MS: 1 ChangeByte- 00:06:54.159 [2024-07-15 19:01:34.556912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.159 [2024-07-15 19:01:34.556941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.439 #6 NEW cov: 12083 ft: 13944 corp: 5/8b lim: 5 exec/s: 0 rss: 71Mb L: 1/4 MS: 1 CopyPart- 00:06:54.439 [2024-07-15 19:01:34.607044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.439 [2024-07-15 19:01:34.607073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.439 #7 NEW cov: 12083 ft: 14031 corp: 6/9b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 ShuffleBytes- 00:06:54.439 [2024-07-15 19:01:34.687472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.439 [2024-07-15 19:01:34.687501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.439 [2024-07-15 19:01:34.687549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.439 [2024-07-15 19:01:34.687565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.439 [2024-07-15 19:01:34.687595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.439 [2024-07-15 19:01:34.687611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.439 [2024-07-15 19:01:34.687640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 
cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.439 [2024-07-15 19:01:34.687659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.439 [2024-07-15 19:01:34.687689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.439 [2024-07-15 19:01:34.687704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.439 #8 NEW cov: 12083 ft: 14143 corp: 7/14b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:06:54.439 [2024-07-15 19:01:34.767663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.439 [2024-07-15 19:01:34.767692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.439 [2024-07-15 19:01:34.767740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.439 [2024-07-15 19:01:34.767756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.439 [2024-07-15 19:01:34.767785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.439 [2024-07-15 19:01:34.767801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.439 [2024-07-15 19:01:34.767829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.439 [2024-07-15 19:01:34.767844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.439 [2024-07-15 19:01:34.767873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.439 [2024-07-15 19:01:34.767888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.439 #9 NEW cov: 12083 ft: 14273 corp: 8/19b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:54.696 [2024-07-15 19:01:34.847677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.696 [2024-07-15 19:01:34.847708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.696 #10 NEW cov: 12083 ft: 14339 corp: 9/20b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:54.696 [2024-07-15 19:01:34.897756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.696 [2024-07-15 19:01:34.897785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.696 #11 NEW cov: 12083 ft: 14390 corp: 10/21b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CopyPart- 00:06:54.696 [2024-07-15 19:01:34.947942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.696 [2024-07-15 19:01:34.947975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.696 [2024-07-15 19:01:34.948024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.696 [2024-07-15 19:01:34.948045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.696 #12 NEW cov: 12083 ft: 14576 corp: 11/23b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:54.696 [2024-07-15 19:01:35.008135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.697 [2024-07-15 19:01:35.008169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.697 [2024-07-15 19:01:35.008225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.697 [2024-07-15 19:01:35.008241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.697 #13 NEW cov: 12083 ft: 14603 corp: 12/25b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:54.697 [2024-07-15 19:01:35.088513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.697 [2024-07-15 19:01:35.088546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.697 [2024-07-15 19:01:35.088594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.697 [2024-07-15 19:01:35.088610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.697 [2024-07-15 19:01:35.088639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.697 [2024-07-15 19:01:35.088655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.697 [2024-07-15 19:01:35.088684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.697 [2024-07-15 19:01:35.088699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.697 [2024-07-15 19:01:35.088728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.697 [2024-07-15 19:01:35.088743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.954 #14 NEW cov: 12083 ft: 14643 corp: 13/30b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBit- 00:06:54.954 [2024-07-15 19:01:35.168552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.954 [2024-07-15 19:01:35.168584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.954 [2024-07-15 19:01:35.168618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.954 [2024-07-15 19:01:35.168633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.213 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:55.213 #15 NEW cov: 12106 ft: 14692 corp: 14/32b lim: 5 exec/s: 15 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:55.213 [2024-07-15 19:01:35.519522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.213 [2024-07-15 19:01:35.519569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.213 #16 NEW cov: 12106 ft: 14761 corp: 15/33b lim: 5 exec/s: 16 rss: 73Mb L: 1/5 MS: 1 CopyPart- 00:06:55.213 [2024-07-15 19:01:35.599708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.213 [2024-07-15 19:01:35.599741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.213 [2024-07-15 19:01:35.599774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.213 [2024-07-15 19:01:35.599790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.213 [2024-07-15 19:01:35.599819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.213 [2024-07-15 19:01:35.599835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.471 #17 NEW cov: 12106 ft: 14928 corp: 16/36b lim: 5 exec/s: 17 rss: 73Mb L: 3/5 MS: 1 CMP- DE: "\000\011"- 00:06:55.471 [2024-07-15 19:01:35.659744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.659775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.471 #18 NEW cov: 12106 ft: 14959 corp: 17/37b lim: 5 exec/s: 18 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:55.471 [2024-07-15 
19:01:35.720108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.720137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.471 [2024-07-15 19:01:35.720186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.720201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.471 [2024-07-15 19:01:35.720238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.720254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.471 [2024-07-15 19:01:35.720283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.720298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.471 [2024-07-15 19:01:35.720327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.720342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.471 #19 NEW cov: 12106 ft: 14976 corp: 18/42b lim: 5 exec/s: 19 rss: 73Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:55.471 [2024-07-15 19:01:35.780304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.780334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.471 [2024-07-15 19:01:35.780368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.780387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.471 [2024-07-15 19:01:35.780417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.780432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.471 [2024-07-15 19:01:35.780477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.780493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.471 [2024-07-15 19:01:35.780522] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.780538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.471 #20 NEW cov: 12106 ft: 15017 corp: 19/47b lim: 5 exec/s: 20 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:55.471 [2024-07-15 19:01:35.860373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.860404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.471 [2024-07-15 19:01:35.860438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.860453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.471 [2024-07-15 19:01:35.860483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.471 [2024-07-15 19:01:35.860498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.730 #21 NEW cov: 12106 ft: 15045 corp: 20/50b lim: 5 exec/s: 21 rss: 73Mb L: 3/5 MS: 1 ChangeByte- 00:06:55.730 [2024-07-15 19:01:35.940459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.730 [2024-07-15 19:01:35.940489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.730 #22 NEW cov: 12106 ft: 15061 corp: 21/51b lim: 5 exec/s: 22 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:55.730 [2024-07-15 19:01:35.990591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.730 [2024-07-15 19:01:35.990624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.730 #23 NEW cov: 12106 ft: 15095 corp: 22/52b lim: 5 exec/s: 23 rss: 73Mb L: 1/5 MS: 1 ChangeBinInt- 00:06:55.730 [2024-07-15 19:01:36.040730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.730 [2024-07-15 19:01:36.040762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.730 #24 NEW cov: 12106 ft: 15130 corp: 23/53b lim: 5 exec/s: 24 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:55.730 [2024-07-15 19:01:36.091061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.730 [2024-07-15 19:01:36.091095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.730 [2024-07-15 
19:01:36.091144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.730 [2024-07-15 19:01:36.091159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.730 [2024-07-15 19:01:36.091189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.730 [2024-07-15 19:01:36.091204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.730 [2024-07-15 19:01:36.091240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.730 [2024-07-15 19:01:36.091255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.730 [2024-07-15 19:01:36.091284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.730 [2024-07-15 19:01:36.091299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.730 #25 NEW cov: 12106 ft: 15141 corp: 24/58b lim: 5 exec/s: 25 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:55.989 [2024-07-15 19:01:36.171271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.171301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.989 [2024-07-15 19:01:36.171348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.171364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.989 [2024-07-15 19:01:36.171393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.171409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.989 [2024-07-15 19:01:36.171438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.171453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.989 [2024-07-15 19:01:36.171481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.171496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.989 #26 NEW cov: 12107 ft: 15149 corp: 25/63b lim: 5 
exec/s: 26 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:06:55.989 [2024-07-15 19:01:36.221427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.221457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.989 [2024-07-15 19:01:36.221490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.221510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.989 [2024-07-15 19:01:36.221540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.221555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.989 [2024-07-15 19:01:36.221584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.221599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.989 [2024-07-15 19:01:36.221628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.221642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.989 #27 NEW cov: 12107 ft: 15184 corp: 26/68b lim: 5 exec/s: 27 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:55.989 [2024-07-15 19:01:36.281559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.281592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.989 [2024-07-15 19:01:36.281639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.281655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.989 [2024-07-15 19:01:36.281684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.281700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.989 [2024-07-15 19:01:36.281729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.281743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:06:55.989 [2024-07-15 19:01:36.281772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.989 [2024-07-15 19:01:36.281787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.989 #28 NEW cov: 12107 ft: 15192 corp: 27/73b lim: 5 exec/s: 14 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:55.989 #28 DONE cov: 12107 ft: 15192 corp: 27/73b lim: 5 exec/s: 14 rss: 73Mb 00:06:55.989 ###### Recommended dictionary. ###### 00:06:55.989 "\000\011" # Uses: 0 00:06:55.989 ###### End of recommended dictionary. ###### 00:06:55.989 Done 28 runs in 2 second(s) 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:56.247 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:56.248 19:01:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:56.248 [2024-07-15 19:01:36.501168] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:06:56.248 [2024-07-15 19:01:36.501247] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672267 ] 00:06:56.248 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.505 [2024-07-15 19:01:36.718899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.505 [2024-07-15 19:01:36.788938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.505 [2024-07-15 19:01:36.848591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.505 [2024-07-15 19:01:36.864891] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:56.505 INFO: Running with entropic power schedule (0xFF, 100). 00:06:56.505 INFO: Seed: 213121867 00:06:56.505 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:56.505 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:56.505 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:56.505 INFO: A corpus is not provided, starting from an empty corpus 00:06:56.505 [2024-07-15 19:01:36.930227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.505 [2024-07-15 19:01:36.930257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.764 #2 INITED cov: 11862 ft: 11862 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:56.764 [2024-07-15 19:01:36.970915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.764 [2024-07-15 19:01:36.970941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.764 [2024-07-15 19:01:36.971015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.764 [2024-07-15 19:01:36.971030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.764 [2024-07-15 19:01:36.971088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.764 [2024-07-15 19:01:36.971105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.764 [2024-07-15 19:01:36.971161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.764 [2024-07-15 19:01:36.971175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.764 [2024-07-15 19:01:36.971233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.764 [2024-07-15 19:01:36.971246] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.764 #3 NEW cov: 11992 ft: 13388 corp: 2/6b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:56.764 [2024-07-15 19:01:37.030435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.764 [2024-07-15 19:01:37.030462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.764 #4 NEW cov: 11998 ft: 13597 corp: 3/7b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ChangeBit- 00:06:56.764 [2024-07-15 19:01:37.070686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.764 [2024-07-15 19:01:37.070712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.764 [2024-07-15 19:01:37.070785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.764 [2024-07-15 19:01:37.070800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.764 #5 NEW cov: 12083 ft: 14008 corp: 4/9b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:06:56.764 [2024-07-15 19:01:37.120790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.764 [2024-07-15 19:01:37.120815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.764 [2024-07-15 19:01:37.120872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.764 [2024-07-15 19:01:37.120885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.764 #6 NEW cov: 12083 ft: 14072 corp: 5/11b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CopyPart- 00:06:56.764 [2024-07-15 19:01:37.160756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.764 [2024-07-15 19:01:37.160781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.764 #7 NEW cov: 12083 ft: 14172 corp: 6/12b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:57.022 [2024-07-15 19:01:37.201543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.201567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.022 [2024-07-15 19:01:37.201624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.201638] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.022 [2024-07-15 19:01:37.201695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.201708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.022 [2024-07-15 19:01:37.201765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.201778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.022 [2024-07-15 19:01:37.201834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.201848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.022 #8 NEW cov: 12083 ft: 14252 corp: 7/17b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:06:57.022 [2024-07-15 19:01:37.251661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.251685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.022 [2024-07-15 19:01:37.251742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.251757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.022 [2024-07-15 19:01:37.251813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.251826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.022 [2024-07-15 19:01:37.251885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.251898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.022 [2024-07-15 19:01:37.251955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.251969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.022 #9 NEW cov: 12083 ft: 14293 corp: 8/22b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeBit- 00:06:57.022 [2024-07-15 19:01:37.291781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:57.022 [2024-07-15 19:01:37.291805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.022 [2024-07-15 19:01:37.291865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.291879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.022 [2024-07-15 19:01:37.291938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.291952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.022 [2024-07-15 19:01:37.292011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.022 [2024-07-15 19:01:37.292024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.022 [2024-07-15 19:01:37.292082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.292095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.023 #10 NEW cov: 12083 ft: 14325 corp: 9/27b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:06:57.023 [2024-07-15 19:01:37.331908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.331932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.023 [2024-07-15 19:01:37.332007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.332022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.023 [2024-07-15 19:01:37.332091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.332109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.023 [2024-07-15 19:01:37.332173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.332189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.023 [2024-07-15 19:01:37.332258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.332273] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.023 #11 NEW cov: 12083 ft: 14377 corp: 10/32b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CrossOver- 00:06:57.023 [2024-07-15 19:01:37.382002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.382027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.023 [2024-07-15 19:01:37.382100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.382115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.023 [2024-07-15 19:01:37.382173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.382187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.023 [2024-07-15 19:01:37.382245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.382259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.023 [2024-07-15 19:01:37.382325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.382338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.023 #12 NEW cov: 12083 ft: 14409 corp: 11/37b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:06:57.023 [2024-07-15 19:01:37.421653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.421678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.023 [2024-07-15 19:01:37.421734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.023 [2024-07-15 19:01:37.421748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.280 #13 NEW cov: 12083 ft: 14507 corp: 12/39b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CopyPart- 00:06:57.280 [2024-07-15 19:01:37.471633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.280 [2024-07-15 19:01:37.471658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.281 #14 NEW cov: 12083 ft: 14564 corp: 13/40b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 
1 ShuffleBytes- 00:06:57.281 [2024-07-15 19:01:37.511905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.511931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.281 [2024-07-15 19:01:37.511991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.512006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.281 #15 NEW cov: 12083 ft: 14572 corp: 14/42b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:06:57.281 [2024-07-15 19:01:37.551862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.551888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.281 #16 NEW cov: 12083 ft: 14657 corp: 15/43b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 CopyPart- 00:06:57.281 [2024-07-15 19:01:37.602613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.602637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.281 [2024-07-15 19:01:37.602712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.602726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.281 [2024-07-15 19:01:37.602794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.602811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.281 [2024-07-15 19:01:37.602873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.602892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.281 [2024-07-15 19:01:37.602954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.602970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.281 #17 NEW cov: 12083 ft: 14718 corp: 16/48b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CrossOver- 00:06:57.281 [2024-07-15 19:01:37.652451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.652475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.281 [2024-07-15 19:01:37.652550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.652564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.281 [2024-07-15 19:01:37.652621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.652635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.281 #18 NEW cov: 12083 ft: 14874 corp: 17/51b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 CrossOver- 00:06:57.281 [2024-07-15 19:01:37.692921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.692946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.281 [2024-07-15 19:01:37.693002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.693016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.281 [2024-07-15 19:01:37.693071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.693085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.281 [2024-07-15 19:01:37.693143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.693156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.281 [2024-07-15 19:01:37.693213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.281 [2024-07-15 19:01:37.693230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.539 #19 NEW cov: 12083 ft: 14883 corp: 18/56b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:57.539 [2024-07-15 19:01:37.732671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.539 [2024-07-15 19:01:37.732697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.539 [2024-07-15 19:01:37.732757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) 
qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.539 [2024-07-15 19:01:37.732772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.539 [2024-07-15 19:01:37.732827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.539 [2024-07-15 19:01:37.732841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.539 #20 NEW cov: 12083 ft: 14901 corp: 19/59b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 EraseBytes- 00:06:57.539 [2024-07-15 19:01:37.783126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.539 [2024-07-15 19:01:37.783152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.539 [2024-07-15 19:01:37.783237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.539 [2024-07-15 19:01:37.783252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.539 [2024-07-15 19:01:37.783308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.539 [2024-07-15 19:01:37.783321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.539 [2024-07-15 19:01:37.783388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.539 [2024-07-15 19:01:37.783401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.539 [2024-07-15 19:01:37.783455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.539 [2024-07-15 19:01:37.783468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.797 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:57.797 #21 NEW cov: 12106 ft: 14937 corp: 20/64b lim: 5 exec/s: 21 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:06:57.797 [2024-07-15 19:01:38.113823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.797 [2024-07-15 19:01:38.113876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.798 [2024-07-15 19:01:38.113961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.798 [2024-07-15 19:01:38.113984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.798 #22 NEW cov: 12106 ft: 15040 corp: 21/66b lim: 5 exec/s: 22 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:06:57.798 [2024-07-15 19:01:38.173869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.798 [2024-07-15 19:01:38.173897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.798 [2024-07-15 19:01:38.173954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.798 [2024-07-15 19:01:38.173972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.798 [2024-07-15 19:01:38.174026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.798 [2024-07-15 19:01:38.174040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.798 [2024-07-15 19:01:38.174095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.798 [2024-07-15 19:01:38.174109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.798 #23 NEW cov: 12106 ft: 15058 corp: 22/70b lim: 5 exec/s: 23 rss: 73Mb L: 4/5 MS: 1 CopyPart- 00:06:57.798 [2024-07-15 19:01:38.223862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.798 [2024-07-15 19:01:38.223888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.798 [2024-07-15 19:01:38.223941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.798 [2024-07-15 19:01:38.223956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.798 [2024-07-15 19:01:38.224009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.798 [2024-07-15 19:01:38.224022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.057 #24 NEW cov: 12106 ft: 15070 corp: 23/73b lim: 5 exec/s: 24 rss: 73Mb L: 3/5 MS: 1 ChangeBinInt- 00:06:58.057 [2024-07-15 19:01:38.273705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.057 [2024-07-15 19:01:38.273730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.057 #25 NEW cov: 12106 ft: 15089 corp: 24/74b lim: 5 exec/s: 25 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:06:58.057 [2024-07-15 
19:01:38.324439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.057 [2024-07-15 19:01:38.324464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.057 [2024-07-15 19:01:38.324535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.057 [2024-07-15 19:01:38.324549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.057 [2024-07-15 19:01:38.324604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.057 [2024-07-15 19:01:38.324617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.057 [2024-07-15 19:01:38.324671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.057 [2024-07-15 19:01:38.324684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.057 [2024-07-15 19:01:38.324739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.057 [2024-07-15 19:01:38.324757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.057 #26 NEW cov: 12106 ft: 15096 corp: 25/79b lim: 5 exec/s: 26 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:06:58.057 [2024-07-15 19:01:38.373927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.057 [2024-07-15 19:01:38.373953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.057 #27 NEW cov: 12106 ft: 15203 corp: 26/80b lim: 5 exec/s: 27 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:58.057 [2024-07-15 19:01:38.414488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.057 [2024-07-15 19:01:38.414513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.057 [2024-07-15 19:01:38.414585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.057 [2024-07-15 19:01:38.414599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.057 [2024-07-15 19:01:38.414652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.057 [2024-07-15 19:01:38.414666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.057 [2024-07-15 19:01:38.414719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.057 [2024-07-15 19:01:38.414732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.057 #28 NEW cov: 12106 ft: 15211 corp: 27/84b lim: 5 exec/s: 28 rss: 73Mb L: 4/5 MS: 1 CMP- DE: "\001\002"- 00:06:58.057 [2024-07-15 19:01:38.454170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.057 [2024-07-15 19:01:38.454195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.057 #29 NEW cov: 12106 ft: 15219 corp: 28/85b lim: 5 exec/s: 29 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:06:58.316 [2024-07-15 19:01:38.494479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.494504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.316 [2024-07-15 19:01:38.494575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.494590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.316 #30 NEW cov: 12106 ft: 15255 corp: 29/87b lim: 5 exec/s: 30 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:06:58.316 [2024-07-15 19:01:38.544486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.544511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.316 #31 NEW cov: 12106 ft: 15300 corp: 30/88b lim: 5 exec/s: 31 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:58.316 [2024-07-15 19:01:38.584805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.584831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.316 [2024-07-15 19:01:38.584882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.584896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.316 [2024-07-15 19:01:38.584947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.584960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.316 #32 NEW cov: 12106 ft: 15304 corp: 
31/91b lim: 5 exec/s: 32 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:06:58.316 [2024-07-15 19:01:38.634958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.634982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.316 [2024-07-15 19:01:38.635052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.635065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.316 [2024-07-15 19:01:38.635127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.635146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.316 #33 NEW cov: 12106 ft: 15342 corp: 32/94b lim: 5 exec/s: 33 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:06:58.316 [2024-07-15 19:01:38.675258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.675293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.316 [2024-07-15 19:01:38.675373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.675387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.316 [2024-07-15 19:01:38.675439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.675453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.316 [2024-07-15 19:01:38.675507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.675520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.316 #34 NEW cov: 12106 ft: 15349 corp: 33/98b lim: 5 exec/s: 34 rss: 73Mb L: 4/5 MS: 1 EraseBytes- 00:06:58.316 [2024-07-15 19:01:38.715468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.715492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.316 [2024-07-15 19:01:38.715550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.715564] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.316 [2024-07-15 19:01:38.715619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.715632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.316 [2024-07-15 19:01:38.715685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.715699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.316 [2024-07-15 19:01:38.715751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.316 [2024-07-15 19:01:38.715764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.575 #35 NEW cov: 12106 ft: 15367 corp: 34/103b lim: 5 exec/s: 35 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:06:58.575 [2024-07-15 19:01:38.765624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.765651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.575 [2024-07-15 19:01:38.765723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.765738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.575 [2024-07-15 19:01:38.765792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.765805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.575 [2024-07-15 19:01:38.765856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.765870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.575 [2024-07-15 19:01:38.765921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.765935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.575 #36 NEW cov: 12106 ft: 15383 corp: 35/108b lim: 5 exec/s: 36 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:06:58.575 [2024-07-15 19:01:38.805725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.805749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.575 [2024-07-15 19:01:38.805820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.805834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.575 [2024-07-15 19:01:38.805889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.805903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.575 [2024-07-15 19:01:38.805956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.805969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.575 [2024-07-15 19:01:38.806024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.806038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.575 #37 NEW cov: 12106 ft: 15390 corp: 36/113b lim: 5 exec/s: 37 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:06:58.575 [2024-07-15 19:01:38.845573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.845598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.575 [2024-07-15 19:01:38.845653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.845667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.575 [2024-07-15 19:01:38.845718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.845732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.575 #38 NEW cov: 12106 ft: 15428 corp: 37/116b lim: 5 exec/s: 38 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:06:58.575 [2024-07-15 19:01:38.895563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.895587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.575 [2024-07-15 19:01:38.895642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) 
qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.575 [2024-07-15 19:01:38.895655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.575 #39 NEW cov: 12106 ft: 15443 corp: 38/118b lim: 5 exec/s: 19 rss: 74Mb L: 2/5 MS: 1 PersAutoDict- DE: "\001\002"- 00:06:58.575 #39 DONE cov: 12106 ft: 15443 corp: 38/118b lim: 5 exec/s: 19 rss: 74Mb 00:06:58.575 ###### Recommended dictionary. ###### 00:06:58.575 "\001\002" # Uses: 1 00:06:58.575 ###### End of recommended dictionary. ###### 00:06:58.575 Done 39 runs in 2 second(s) 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:58.835 19:01:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:58.835 [2024-07-15 19:01:39.103009] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
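The run above closed with libFuzzer's DONE summary (#39 DONE cov: 12106 ft: 15443 corp: 38/118b exec/s: 19 rss: 74Mb) and a recommended dictionary whose single entry, "\001\002", was the token the CMP- and PersAutoDict- mutations in the log kept reusing. As a minimal sketch only — assuming the SPDK wrapper forwards standard libFuzzer options unchanged, which this log does not confirm — that entry could be saved in a dictionary file and fed back to a later run through libFuzzer's standard -dict= option; the file path and entry name here are illustrative, not taken from this job:

    # Hypothetical replay sketch; the path and entry name are made up.
    cat > /tmp/nvmf_recommended.dict <<'EOF'
    # libFuzzer/AFL dictionary syntax: optional name, ="value", \xNN escapes
    admin_token="\x01\x02"
    EOF
    # On a later invocation, append the standard libFuzzer flag
    # (assumes unrecognized flags pass through to the fuzzer engine):
    #   ... -dict=/tmp/nvmf_recommended.dict

Seeding the dictionary this way would let the mutator insert the token directly instead of rediscovering it through CrossOver/CMP feedback, as it had to in the run above.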
00:06:58.835 [2024-07-15 19:01:39.103078] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672576 ] 00:06:58.835 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.094 [2024-07-15 19:01:39.324115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.094 [2024-07-15 19:01:39.395688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.094 [2024-07-15 19:01:39.455385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.094 [2024-07-15 19:01:39.471688] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:59.094 INFO: Running with entropic power schedule (0xFF, 100). 00:06:59.094 INFO: Seed: 2818127705 00:06:59.094 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:59.094 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:59.094 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:59.094 INFO: A corpus is not provided, starting from an empty corpus 00:06:59.094 #2 INITED exec/s: 0 rss: 64Mb 00:06:59.094 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:59.094 This may also happen if the target rejected all inputs we tried so far 00:06:59.352 [2024-07-15 19:01:39.530988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.352 [2024-07-15 19:01:39.531018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.352 [2024-07-15 19:01:39.531080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.352 [2024-07-15 19:01:39.531094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.352 [2024-07-15 19:01:39.531152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.352 [2024-07-15 19:01:39.531166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.612 NEW_FUNC[1/695]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:59.612 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:59.612 #3 NEW cov: 11885 ft: 11885 corp: 2/29b lim: 40 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:06:59.612 [2024-07-15 19:01:39.872002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.612 [2024-07-15 19:01:39.872060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.612 [2024-07-15 19:01:39.872148] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.612 [2024-07-15 19:01:39.872175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.612 [2024-07-15 19:01:39.872267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.612 [2024-07-15 19:01:39.872292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.612 #4 NEW cov: 12015 ft: 12560 corp: 3/58b lim: 40 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 CrossOver- 00:06:59.612 [2024-07-15 19:01:39.931843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.612 [2024-07-15 19:01:39.931870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.612 [2024-07-15 19:01:39.931930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.612 [2024-07-15 19:01:39.931943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.612 [2024-07-15 19:01:39.932003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.612 [2024-07-15 19:01:39.932017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.612 #5 NEW cov: 12021 ft: 12813 corp: 4/87b lim: 40 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 ChangeBit- 00:06:59.612 [2024-07-15 19:01:39.981843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a656565 cdw11:0a656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.612 [2024-07-15 19:01:39.981871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.612 [2024-07-15 19:01:39.981931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.612 [2024-07-15 19:01:39.981944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.612 #6 NEW cov: 12106 ft: 13311 corp: 5/110b lim: 40 exec/s: 0 rss: 72Mb L: 23/29 MS: 1 CrossOver- 00:06:59.612 [2024-07-15 19:01:40.022234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:095d0a0a cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.612 [2024-07-15 19:01:40.022260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.612 [2024-07-15 19:01:40.022326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.612 [2024-07-15 19:01:40.022341] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.612 [2024-07-15 19:01:40.022403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.612 [2024-07-15 19:01:40.022416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.612 [2024-07-15 19:01:40.022476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.612 [2024-07-15 19:01:40.022490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.870 #10 NEW cov: 12106 ft: 13868 corp: 6/142b lim: 40 exec/s: 0 rss: 72Mb L: 32/32 MS: 4 InsertByte-InsertByte-ChangeByte-CrossOver- 00:06:59.870 [2024-07-15 19:01:40.062241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.870 [2024-07-15 19:01:40.062274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.870 [2024-07-15 19:01:40.062350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.870 [2024-07-15 19:01:40.062365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.870 [2024-07-15 19:01:40.062424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.870 [2024-07-15 19:01:40.062438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.870 #11 NEW cov: 12106 ft: 13938 corp: 7/171b lim: 40 exec/s: 0 rss: 72Mb L: 29/32 MS: 1 ChangeByte- 00:06:59.870 [2024-07-15 19:01:40.112371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.870 [2024-07-15 19:01:40.112398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.870 [2024-07-15 19:01:40.112476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.870 [2024-07-15 19:01:40.112490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.870 [2024-07-15 19:01:40.112548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.870 [2024-07-15 19:01:40.112562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.870 #12 NEW cov: 12106 ft: 14039 corp: 8/201b lim: 40 exec/s: 0 rss: 72Mb L: 30/32 MS: 1 InsertByte- 00:06:59.870 [2024-07-15 19:01:40.152476] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:280a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.870 [2024-07-15 19:01:40.152502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.871 [2024-07-15 19:01:40.152578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.871 [2024-07-15 19:01:40.152592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.871 [2024-07-15 19:01:40.152650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.871 [2024-07-15 19:01:40.152666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.871 #13 NEW cov: 12106 ft: 14091 corp: 9/230b lim: 40 exec/s: 0 rss: 72Mb L: 29/32 MS: 1 ChangeByte- 00:06:59.871 [2024-07-15 19:01:40.192560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.871 [2024-07-15 19:01:40.192586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.871 [2024-07-15 19:01:40.192645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.871 [2024-07-15 19:01:40.192659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.871 [2024-07-15 19:01:40.192717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.871 [2024-07-15 19:01:40.192731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.871 #14 NEW cov: 12106 ft: 14155 corp: 10/260b lim: 40 exec/s: 0 rss: 72Mb L: 30/32 MS: 1 CopyPart- 00:06:59.871 [2024-07-15 19:01:40.242720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:280a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.871 [2024-07-15 19:01:40.242745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.871 [2024-07-15 19:01:40.242804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6565e565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.871 [2024-07-15 19:01:40.242818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.871 [2024-07-15 19:01:40.242876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.871 [2024-07-15 19:01:40.242889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:06:59.871 #15 NEW cov: 12106 ft: 14235 corp: 11/290b lim: 40 exec/s: 0 rss: 73Mb L: 30/32 MS: 1 CopyPart- 00:06:59.871 [2024-07-15 19:01:40.293012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:095d0a0a cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.871 [2024-07-15 19:01:40.293036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.871 [2024-07-15 19:01:40.293111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.871 [2024-07-15 19:01:40.293125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.871 [2024-07-15 19:01:40.293184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.871 [2024-07-15 19:01:40.293197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.871 [2024-07-15 19:01:40.293263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.871 [2024-07-15 19:01:40.293276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.157 #21 NEW cov: 12106 ft: 14270 corp: 12/322b lim: 40 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:00.157 [2024-07-15 19:01:40.343026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.343054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.157 [2024-07-15 19:01:40.343114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.343128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.157 [2024-07-15 19:01:40.343188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.343201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.157 #22 NEW cov: 12106 ft: 14305 corp: 13/351b lim: 40 exec/s: 0 rss: 73Mb L: 29/32 MS: 1 ShuffleBytes- 00:07:00.157 [2024-07-15 19:01:40.382846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0a65 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.382871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.157 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:00.157 #23 NEW cov: 12129 ft: 14681 corp: 14/360b lim: 40 exec/s: 0 rss: 73Mb 
L: 9/32 MS: 1 CrossOver- 00:07:00.157 [2024-07-15 19:01:40.433257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.433282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.157 [2024-07-15 19:01:40.433362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.433377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.157 [2024-07-15 19:01:40.433437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65646565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.433451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.157 #24 NEW cov: 12129 ft: 14693 corp: 15/389b lim: 40 exec/s: 0 rss: 73Mb L: 29/32 MS: 1 ChangeBinInt- 00:07:00.157 [2024-07-15 19:01:40.473524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:095d0a0a cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.473551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.157 [2024-07-15 19:01:40.473613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.473627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.157 [2024-07-15 19:01:40.473686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.473699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.157 [2024-07-15 19:01:40.473759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:65656565 cdw11:28656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.473775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.157 #25 NEW cov: 12129 ft: 14752 corp: 16/422b lim: 40 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertByte- 00:07:00.157 [2024-07-15 19:01:40.523382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.523408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.157 [2024-07-15 19:01:40.523471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.523484] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.157 #26 NEW cov: 12129 ft: 14794 corp: 17/441b lim: 40 exec/s: 26 rss: 73Mb L: 19/33 MS: 1 EraseBytes- 00:07:00.157 [2024-07-15 19:01:40.563624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.563650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.157 [2024-07-15 19:01:40.563713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.563727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.157 [2024-07-15 19:01:40.563787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.157 [2024-07-15 19:01:40.563800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.157 #27 NEW cov: 12129 ft: 14809 corp: 18/470b lim: 40 exec/s: 27 rss: 73Mb L: 29/33 MS: 1 InsertByte- 00:07:00.415 [2024-07-15 19:01:40.603714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.603740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.416 [2024-07-15 19:01:40.603819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.603833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.416 [2024-07-15 19:01:40.603891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:656565d9 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.603904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.416 #28 NEW cov: 12129 ft: 14819 corp: 19/500b lim: 40 exec/s: 28 rss: 73Mb L: 30/33 MS: 1 InsertByte- 00:07:00.416 [2024-07-15 19:01:40.643835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.643861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.416 [2024-07-15 19:01:40.643940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.643955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.416 [2024-07-15 19:01:40.644014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65659265 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.644030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.416 #29 NEW cov: 12129 ft: 14871 corp: 20/529b lim: 40 exec/s: 29 rss: 73Mb L: 29/33 MS: 1 ChangeBinInt- 00:07:00.416 [2024-07-15 19:01:40.693865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.693891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.416 [2024-07-15 19:01:40.693954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65650a0a cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.693968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.416 #30 NEW cov: 12129 ft: 14919 corp: 21/547b lim: 40 exec/s: 30 rss: 73Mb L: 18/33 MS: 1 CopyPart- 00:07:00.416 [2024-07-15 19:01:40.744119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.744145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.416 [2024-07-15 19:01:40.744208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.744233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.416 [2024-07-15 19:01:40.744298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.744313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.416 #31 NEW cov: 12129 ft: 15015 corp: 22/576b lim: 40 exec/s: 31 rss: 73Mb L: 29/33 MS: 1 ShuffleBytes- 00:07:00.416 [2024-07-15 19:01:40.784366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:280a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.784392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.416 [2024-07-15 19:01:40.784470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6565e565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.784483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.416 [2024-07-15 19:01:40.784544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:6565650a cdw11:0a656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.784558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.416 [2024-07-15 19:01:40.784616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.784629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.416 #32 NEW cov: 12129 ft: 15027 corp: 23/614b lim: 40 exec/s: 32 rss: 73Mb L: 38/38 MS: 1 CrossOver- 00:07:00.416 [2024-07-15 19:01:40.834385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.834411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.416 [2024-07-15 19:01:40.834474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656566 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.834488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.416 [2024-07-15 19:01:40.834548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:656565d9 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.416 [2024-07-15 19:01:40.834560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.675 #33 NEW cov: 12129 ft: 15065 corp: 24/644b lim: 40 exec/s: 33 rss: 73Mb L: 30/38 MS: 1 ChangeBinInt- 00:07:00.675 [2024-07-15 19:01:40.884641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:280a6565 cdw11:40656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:40.884666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.675 [2024-07-15 19:01:40.884728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6565e565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:40.884742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.675 [2024-07-15 19:01:40.884805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:6565650a cdw11:0a656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:40.884819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.675 [2024-07-15 19:01:40.884880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:40.884894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.675 #34 NEW cov: 12129 ft: 15098 corp: 25/682b lim: 40 exec/s: 34 rss: 73Mb L: 38/38 MS: 1 ChangeByte- 00:07:00.675 [2024-07-15 19:01:40.934630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 
cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:40.934655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.675 [2024-07-15 19:01:40.934733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:40.934747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.675 [2024-07-15 19:01:40.934811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:45656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:40.934824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.675 #35 NEW cov: 12129 ft: 15113 corp: 26/711b lim: 40 exec/s: 35 rss: 73Mb L: 29/38 MS: 1 ChangeBit- 00:07:00.675 [2024-07-15 19:01:40.984763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:40.984787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.675 [2024-07-15 19:01:40.984848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:40.984865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.675 [2024-07-15 19:01:40.984942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:40.984956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.675 #36 NEW cov: 12129 ft: 15138 corp: 27/741b lim: 40 exec/s: 36 rss: 74Mb L: 30/38 MS: 1 ShuffleBytes- 00:07:00.675 [2024-07-15 19:01:41.034930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:41.034954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.675 [2024-07-15 19:01:41.035030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65646565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:41.035044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.675 [2024-07-15 19:01:41.035104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:41.035117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.675 #37 NEW cov: 12129 ft: 15141 corp: 28/766b lim: 40 exec/s: 37 rss: 74Mb L: 
25/38 MS: 1 EraseBytes- 00:07:00.675 [2024-07-15 19:01:41.085034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affff7e cdw11:ca281039 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:41.085059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.675 [2024-07-15 19:01:41.085141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cb656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:41.085155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.675 [2024-07-15 19:01:41.085221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.675 [2024-07-15 19:01:41.085235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.934 #38 NEW cov: 12129 ft: 15153 corp: 29/793b lim: 40 exec/s: 38 rss: 74Mb L: 27/38 MS: 1 CMP- DE: "\377\377~\312(\0209\313"- 00:07:00.934 [2024-07-15 19:01:41.135185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.934 [2024-07-15 19:01:41.135211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.934 [2024-07-15 19:01:41.135290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.934 [2024-07-15 19:01:41.135304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.934 [2024-07-15 19:01:41.135367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:45655b65 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.934 [2024-07-15 19:01:41.135381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.934 #39 NEW cov: 12129 ft: 15168 corp: 30/822b lim: 40 exec/s: 39 rss: 74Mb L: 29/38 MS: 1 ChangeBinInt- 00:07:00.934 [2024-07-15 19:01:41.185365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:6565e565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.934 [2024-07-15 19:01:41.185391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.934 [2024-07-15 19:01:41.185451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.934 [2024-07-15 19:01:41.185465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.934 [2024-07-15 19:01:41.185527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.934 [2024-07-15 19:01:41.185540] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.934 #40 NEW cov: 12129 ft: 15206 corp: 31/851b lim: 40 exec/s: 40 rss: 74Mb L: 29/38 MS: 1 ChangeBit- 00:07:00.934 [2024-07-15 19:01:41.225542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:095deef5 cdw11:9a9a9a9a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.934 [2024-07-15 19:01:41.225568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.934 [2024-07-15 19:01:41.225628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f59a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.934 [2024-07-15 19:01:41.225641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.934 [2024-07-15 19:01:41.225704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.934 [2024-07-15 19:01:41.225716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.934 [2024-07-15 19:01:41.225777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:65656565 cdw11:28656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.934 [2024-07-15 19:01:41.225789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.934 #41 NEW cov: 12129 ft: 15261 corp: 32/884b lim: 40 exec/s: 41 rss: 74Mb L: 33/38 MS: 1 ChangeBinInt- 00:07:00.934 [2024-07-15 19:01:41.275588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.934 [2024-07-15 19:01:41.275612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.935 [2024-07-15 19:01:41.275691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:6f656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.935 [2024-07-15 19:01:41.275705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.935 [2024-07-15 19:01:41.275764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.935 [2024-07-15 19:01:41.275778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.935 #42 NEW cov: 12129 ft: 15292 corp: 33/912b lim: 40 exec/s: 42 rss: 74Mb L: 28/38 MS: 1 ChangeBinInt- 00:07:00.935 [2024-07-15 19:01:41.315695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.935 [2024-07-15 19:01:41.315720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.935 [2024-07-15 19:01:41.315799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.935 [2024-07-15 19:01:41.315813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.935 [2024-07-15 19:01:41.315874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:45656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.935 [2024-07-15 19:01:41.315888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.935 #43 NEW cov: 12129 ft: 15302 corp: 34/941b lim: 40 exec/s: 43 rss: 74Mb L: 29/38 MS: 1 ChangeBit- 00:07:00.935 [2024-07-15 19:01:41.355681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.935 [2024-07-15 19:01:41.355706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.935 [2024-07-15 19:01:41.355767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:6565652a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.935 [2024-07-15 19:01:41.355781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.210 #44 NEW cov: 12129 ft: 15308 corp: 35/959b lim: 40 exec/s: 44 rss: 74Mb L: 18/38 MS: 1 EraseBytes- 00:07:01.210 [2024-07-15 19:01:41.406086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:095deef5 cdw11:9a9a9a9a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.406111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.210 [2024-07-15 19:01:41.406175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f59a6565 cdw11:6565cfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.406189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.210 [2024-07-15 19:01:41.406254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcf65 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.406269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.210 [2024-07-15 19:01:41.406328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.406342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.210 #45 NEW cov: 12129 ft: 15318 corp: 36/997b lim: 40 exec/s: 45 rss: 74Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:01.210 [2024-07-15 19:01:41.456222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a6565 cdw11:65656588 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.456247] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.210 [2024-07-15 19:01:41.456311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:88886565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.456325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.210 [2024-07-15 19:01:41.456385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.456400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.210 [2024-07-15 19:01:41.456461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.456474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.210 #46 NEW cov: 12129 ft: 15341 corp: 37/1029b lim: 40 exec/s: 46 rss: 74Mb L: 32/38 MS: 1 InsertRepeatedBytes- 00:07:01.210 [2024-07-15 19:01:41.506478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:095d0a0a cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.506503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.210 [2024-07-15 19:01:41.506581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.506595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.210 [2024-07-15 19:01:41.506656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:6565ffff cdw11:7eca2810 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.506669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.210 [2024-07-15 19:01:41.506727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:39cb6565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.506741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.210 [2024-07-15 19:01:41.506801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.210 [2024-07-15 19:01:41.506815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.210 #47 NEW cov: 12129 ft: 15386 corp: 38/1069b lim: 40 exec/s: 23 rss: 74Mb L: 40/40 MS: 1 PersAutoDict- DE: "\377\377~\312(\0209\313"- 00:07:01.210 #47 DONE cov: 12129 ft: 15386 corp: 38/1069b lim: 40 exec/s: 23 rss: 74Mb 00:07:01.210 ###### Recommended dictionary. 
######
00:07:01.210 "\377\377~\312(\0209\313" # Uses: 1
00:07:01.210 ###### End of recommended dictionary. ######
00:07:01.210 Done 47 runs in 2 second(s)
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411'
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:01.469 19:01:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11
[2024-07-15 19:01:41.708182] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
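The traced nvmf/run.sh steps above (run.sh@23 through run.sh@45) parameterize one fuzzer instance end to end. A minimal bash sketch of the same sequence follows; variable names are taken from the trace, while the port composition and the two redirections are inferred, since bash xtrace prints neither string splicing nor redirects.

# Sketch of the per-fuzzer setup traced above (fuzzer 11).
# The port rule is inferred: printf %02d 11 gives "11", and the log then
# shows port=4411, so the port appears to be 44 plus the padded fuzzer number.
rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
fuzzer_type=11
port=44$(printf %02d "$fuzzer_type")
corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
suppress_file=/var/tmp/suppress_nvmf_fuzz
LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0
mkdir -p "$corpus_dir"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
# Swap the stock listener port 4420 for the per-fuzzer port in the JSON
# config; the redirect into $nvmf_cfg is an assumption (xtrace hides it).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
# LSAN suppression list: leak reports attributed to these symbols are
# ignored (redirect into $suppress_file likewise assumed).
echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
echo leak:nvmf_ctrlr_create >> "$suppress_file"

The llvm_nvme_fuzz invocation at run.sh@45 then consumes exactly these values, passing -c "$nvmf_cfg", -F "$trid", -D "$corpus_dir" and -Z $fuzzer_type, which matches the SPDK startup banner that follows in the log.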
00:07:01.469 [2024-07-15 19:01:41.708254] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672945 ] 00:07:01.469 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.727 [2024-07-15 19:01:41.926913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.728 [2024-07-15 19:01:41.998407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.728 [2024-07-15 19:01:42.058126] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.728 [2024-07-15 19:01:42.074426] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:01.728 INFO: Running with entropic power schedule (0xFF, 100). 00:07:01.728 INFO: Seed: 1125174325 00:07:01.728 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:01.728 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:01.728 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:01.728 INFO: A corpus is not provided, starting from an empty corpus 00:07:01.728 #2 INITED exec/s: 0 rss: 65Mb 00:07:01.728 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:01.728 This may also happen if the target rejected all inputs we tried so far 00:07:01.728 [2024-07-15 19:01:42.122101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.728 [2024-07-15 19:01:42.122136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.245 NEW_FUNC[1/696]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:02.245 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:02.245 #3 NEW cov: 11897 ft: 11898 corp: 2/10b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:02.245 [2024-07-15 19:01:42.493039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.245 [2024-07-15 19:01:42.493084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.245 #4 NEW cov: 12027 ft: 12361 corp: 3/19b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:02.245 [2024-07-15 19:01:42.573081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.245 [2024-07-15 19:01:42.573113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.245 #6 NEW cov: 12033 ft: 12713 corp: 4/29b lim: 40 exec/s: 0 rss: 72Mb L: 10/10 MS: 2 ShuffleBytes-CrossOver- 00:07:02.245 [2024-07-15 19:01:42.623275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.245 [2024-07-15 19:01:42.623306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.245 [2024-07-15 19:01:42.623355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.245 [2024-07-15 19:01:42.623371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.504 #7 NEW cov: 12118 ft: 13608 corp: 5/47b lim: 40 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:02.504 [2024-07-15 19:01:42.703486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0000ff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.504 [2024-07-15 19:01:42.703516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.504 [2024-07-15 19:01:42.703565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.504 [2024-07-15 19:01:42.703581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.504 #8 NEW cov: 12118 ft: 13664 corp: 6/68b lim: 40 exec/s: 0 rss: 72Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:07:02.504 [2024-07-15 19:01:42.783646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.504 [2024-07-15 19:01:42.783677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.504 #9 NEW cov: 12118 ft: 13723 corp: 7/77b lim: 40 exec/s: 0 rss: 73Mb L: 9/21 MS: 1 ChangeBit- 00:07:02.504 [2024-07-15 19:01:42.863898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.504 [2024-07-15 19:01:42.863928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.504 [2024-07-15 19:01:42.863976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:80000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.504 [2024-07-15 19:01:42.863992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.504 #10 NEW cov: 12118 ft: 13791 corp: 8/95b lim: 40 exec/s: 0 rss: 73Mb L: 18/21 MS: 1 CrossOver- 00:07:02.504 [2024-07-15 19:01:42.924074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.504 [2024-07-15 19:01:42.924104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.504 [2024-07-15 19:01:42.924138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:80000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.504 [2024-07-15 19:01:42.924154] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.763 #11 NEW cov: 12118 ft: 13844 corp: 9/113b lim: 40 exec/s: 0 rss: 73Mb L: 18/21 MS: 1 ShuffleBytes- 00:07:02.763 [2024-07-15 19:01:43.004246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.763 [2024-07-15 19:01:43.004275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.763 [2024-07-15 19:01:43.004324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:80000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.763 [2024-07-15 19:01:43.004344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.763 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:02.763 #12 NEW cov: 12135 ft: 13891 corp: 10/129b lim: 40 exec/s: 0 rss: 73Mb L: 16/21 MS: 1 EraseBytes- 00:07:02.763 [2024-07-15 19:01:43.064392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000300 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.763 [2024-07-15 19:01:43.064422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.763 [2024-07-15 19:01:43.064456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.763 [2024-07-15 19:01:43.064471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.763 #13 NEW cov: 12135 ft: 13921 corp: 11/147b lim: 40 exec/s: 0 rss: 73Mb L: 18/21 MS: 1 CMP- DE: "\003\000\000\000"- 00:07:02.763 [2024-07-15 19:01:43.114437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff030000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.763 [2024-07-15 19:01:43.114466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.763 #15 NEW cov: 12135 ft: 13960 corp: 12/156b lim: 40 exec/s: 15 rss: 73Mb L: 9/21 MS: 2 ChangeByte-CMP- DE: "\377\003\000\000\000\000\000\000"- 00:07:02.763 [2024-07-15 19:01:43.164625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000300 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.763 [2024-07-15 19:01:43.164657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.763 [2024-07-15 19:01:43.164708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.763 [2024-07-15 19:01:43.164726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.021 #16 NEW cov: 12135 ft: 13992 corp: 13/174b lim: 40 exec/s: 16 rss: 73Mb L: 18/21 MS: 1 CopyPart- 00:07:03.021 [2024-07-15 19:01:43.244906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0000fc00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.021 [2024-07-15 19:01:43.244937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.021 [2024-07-15 19:01:43.244971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.021 [2024-07-15 19:01:43.244986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.021 #17 NEW cov: 12135 ft: 14041 corp: 14/192b lim: 40 exec/s: 17 rss: 73Mb L: 18/21 MS: 1 ChangeBinInt- 00:07:03.021 [2024-07-15 19:01:43.305042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000300 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.021 [2024-07-15 19:01:43.305075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.021 [2024-07-15 19:01:43.305108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.021 [2024-07-15 19:01:43.305124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.021 #18 NEW cov: 12135 ft: 14095 corp: 15/210b lim: 40 exec/s: 18 rss: 73Mb L: 18/21 MS: 1 ChangeBit- 00:07:03.021 [2024-07-15 19:01:43.365135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.021 [2024-07-15 19:01:43.365166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.021 [2024-07-15 19:01:43.365215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.021 [2024-07-15 19:01:43.365240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.021 #19 NEW cov: 12135 ft: 14106 corp: 16/229b lim: 40 exec/s: 19 rss: 73Mb L: 19/21 MS: 1 CrossOver- 00:07:03.021 [2024-07-15 19:01:43.415237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.021 [2024-07-15 19:01:43.415268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.279 #20 NEW cov: 12135 ft: 14116 corp: 17/239b lim: 40 exec/s: 20 rss: 73Mb L: 10/21 MS: 1 CrossOver- 00:07:03.279 [2024-07-15 19:01:43.475455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.279 [2024-07-15 19:01:43.475488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.279 [2024-07-15 19:01:43.475521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00800000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.279 [2024-07-15 
19:01:43.475537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.279 #21 NEW cov: 12135 ft: 14149 corp: 18/255b lim: 40 exec/s: 21 rss: 73Mb L: 16/21 MS: 1 CopyPart- 00:07:03.279 [2024-07-15 19:01:43.555658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.279 [2024-07-15 19:01:43.555690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.279 [2024-07-15 19:01:43.555740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fff50000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.279 [2024-07-15 19:01:43.555757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.279 #22 NEW cov: 12135 ft: 14161 corp: 19/273b lim: 40 exec/s: 22 rss: 73Mb L: 18/21 MS: 1 CMP- DE: "\377\377\377\365"- 00:07:03.280 [2024-07-15 19:01:43.605781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ae10000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.280 [2024-07-15 19:01:43.605812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.280 [2024-07-15 19:01:43.605861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.280 [2024-07-15 19:01:43.605876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.280 #23 NEW cov: 12135 ft: 14176 corp: 20/292b lim: 40 exec/s: 23 rss: 73Mb L: 19/21 MS: 1 InsertByte- 00:07:03.280 [2024-07-15 19:01:43.655907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000100 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.280 [2024-07-15 19:01:43.655939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.538 #24 NEW cov: 12135 ft: 14271 corp: 21/302b lim: 40 exec/s: 24 rss: 73Mb L: 10/21 MS: 1 ChangeBit- 00:07:03.538 [2024-07-15 19:01:43.736251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ae10000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.538 [2024-07-15 19:01:43.736284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.538 [2024-07-15 19:01:43.736319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.538 [2024-07-15 19:01:43.736334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.538 [2024-07-15 19:01:43.736364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.538 [2024-07-15 19:01:43.736380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 
p:0 m:0 dnr:0 00:07:03.538 #25 NEW cov: 12135 ft: 14532 corp: 22/332b lim: 40 exec/s: 25 rss: 73Mb L: 30/30 MS: 1 CopyPart- 00:07:03.538 [2024-07-15 19:01:43.816392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffff0100 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.538 [2024-07-15 19:01:43.816425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.538 [2024-07-15 19:01:43.816459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fff50000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.538 [2024-07-15 19:01:43.816476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.538 #26 NEW cov: 12135 ft: 14550 corp: 23/350b lim: 40 exec/s: 26 rss: 73Mb L: 18/30 MS: 1 CMP- DE: "\377\377\001\000"- 00:07:03.538 [2024-07-15 19:01:43.896535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.538 [2024-07-15 19:01:43.896568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.538 #27 NEW cov: 12135 ft: 14555 corp: 24/363b lim: 40 exec/s: 27 rss: 74Mb L: 13/30 MS: 1 EraseBytes- 00:07:03.797 [2024-07-15 19:01:43.976709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.797 [2024-07-15 19:01:43.976739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.797 #28 NEW cov: 12142 ft: 14608 corp: 25/373b lim: 40 exec/s: 28 rss: 74Mb L: 10/30 MS: 1 CrossOver- 00:07:03.797 [2024-07-15 19:01:44.026890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.797 [2024-07-15 19:01:44.026921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.797 [2024-07-15 19:01:44.026955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0080fe00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.797 [2024-07-15 19:01:44.026971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.797 #34 NEW cov: 12142 ft: 14650 corp: 26/389b lim: 40 exec/s: 34 rss: 74Mb L: 16/30 MS: 1 ChangeBinInt- 00:07:03.797 [2024-07-15 19:01:44.107070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.797 [2024-07-15 19:01:44.107100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.797 [2024-07-15 19:01:44.107149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fff50000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.797 [2024-07-15 19:01:44.107169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0
00:07:03.797 #35 NEW cov: 12142 ft: 14672 corp: 27/407b lim: 40 exec/s: 17 rss: 74Mb L: 18/30 MS: 1 ChangeByte-
00:07:03.797 #35 DONE cov: 12142 ft: 14672 corp: 27/407b lim: 40 exec/s: 17 rss: 74Mb
00:07:03.797 ###### Recommended dictionary. ######
00:07:03.797 "\000\000\000\000\000\000\000\000" # Uses: 2
00:07:03.797 "\003\000\000\000" # Uses: 0
00:07:03.797 "\377\003\000\000\000\000\000\000" # Uses: 0
00:07:03.797 "\377\377\377\365" # Uses: 1
00:07:03.797 "\377\377\001\000" # Uses: 0
00:07:03.797 ###### End of recommended dictionary. ######
00:07:03.797 Done 35 runs in 2 second(s)
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412'
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
19:01:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12
00:07:04.329 [2024-07-15 19:01:44.317799] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
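Each completed run prints a "Recommended dictionary." block like the five-entry one above, just before the next fuzzer's setup trace. Those entries are rendered with C-style octal escapes, whereas a libFuzzer -dict= file expects hex escapes (\xNN). A hypothetical sketch of carrying two of the entries over into a persistent dictionary; the file name and entry names are invented, and whether the llvm_nvme_fuzz wrapper forwards -dict to libFuzzer is not visible in this log.

# Hypothetical: persist two recommended entries in libFuzzer dictionary
# syntax. Octal \000 becomes hex \x00, \003 becomes \x03, \377 becomes \xff.
cat > /tmp/nvmf_11.dict <<'EOF'
zero_qword="\x00\x00\x00\x00\x00\x00\x00\x00"
ff03_qword="\xff\x03\x00\x00\x00\x00\x00\x00"
EOF

A plain libFuzzer target would then be launched with -dict=/tmp/nvmf_11.dict, letting the mutator splice these values in directly rather than rediscovering them in each run.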
00:07:04.057 [2024-07-15 19:01:44.317875] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673314 ] 00:07:04.057 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.329 [2024-07-15 19:01:44.529321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.329 [2024-07-15 19:01:44.599540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.329 [2024-07-15 19:01:44.659180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.329 [2024-07-15 19:01:44.675504] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:04.329 INFO: Running with entropic power schedule (0xFF, 100). 00:07:04.329 INFO: Seed: 3728147917 00:07:04.329 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:04.329 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:04.329 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:04.329 INFO: A corpus is not provided, starting from an empty corpus 00:07:04.329 #2 INITED exec/s: 0 rss: 65Mb 00:07:04.329 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:04.329 This may also happen if the target rejected all inputs we tried so far 00:07:04.329 [2024-07-15 19:01:44.753326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.330 [2024-07-15 19:01:44.753369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.330 [2024-07-15 19:01:44.753474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.330 [2024-07-15 19:01:44.753492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.849 NEW_FUNC[1/696]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:04.849 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:04.849 #18 NEW cov: 11895 ft: 11884 corp: 2/22b lim: 40 exec/s: 0 rss: 72Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:07:04.849 [2024-07-15 19:01:45.113614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.849 [2024-07-15 19:01:45.113664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.850 [2024-07-15 19:01:45.113777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.850 [2024-07-15 19:01:45.113799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.850 #19 NEW cov: 12025 ft: 12510 corp: 3/43b lim: 40 exec/s: 0 rss: 73Mb L: 
21/21 MS: 1 ShuffleBytes- 00:07:04.850 [2024-07-15 19:01:45.183894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a3d0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.850 [2024-07-15 19:01:45.183923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.850 [2024-07-15 19:01:45.184023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.850 [2024-07-15 19:01:45.184040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.850 #20 NEW cov: 12031 ft: 12822 corp: 4/64b lim: 40 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 ChangeByte- 00:07:04.850 [2024-07-15 19:01:45.244309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.850 [2024-07-15 19:01:45.244337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.850 [2024-07-15 19:01:45.244429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.850 [2024-07-15 19:01:45.244445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.850 #22 NEW cov: 12116 ft: 13093 corp: 5/81b lim: 40 exec/s: 0 rss: 73Mb L: 17/21 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:05.108 [2024-07-15 19:01:45.294605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a3d0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.108 [2024-07-15 19:01:45.294636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.108 [2024-07-15 19:01:45.294745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0bf30b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.108 [2024-07-15 19:01:45.294761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.108 #23 NEW cov: 12116 ft: 13246 corp: 6/102b lim: 40 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 ChangeBinInt- 00:07:05.108 [2024-07-15 19:01:45.355028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b2b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.108 [2024-07-15 19:01:45.355055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.108 [2024-07-15 19:01:45.355155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.108 [2024-07-15 19:01:45.355170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.108 #24 NEW cov: 12116 ft: 13302 corp: 7/123b lim: 40 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 ChangeBit- 00:07:05.108 [2024-07-15 19:01:45.405421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.108 [2024-07-15 19:01:45.405447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.108 [2024-07-15 19:01:45.405549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.108 [2024-07-15 19:01:45.405565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.108 #25 NEW cov: 12116 ft: 13399 corp: 8/140b lim: 40 exec/s: 0 rss: 73Mb L: 17/21 MS: 1 ShuffleBytes- 00:07:05.108 [2024-07-15 19:01:45.465732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.108 [2024-07-15 19:01:45.465759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.108 [2024-07-15 19:01:45.465848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.108 [2024-07-15 19:01:45.465864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.108 #26 NEW cov: 12116 ft: 13432 corp: 9/157b lim: 40 exec/s: 0 rss: 73Mb L: 17/21 MS: 1 ShuffleBytes- 00:07:05.108 [2024-07-15 19:01:45.516044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.108 [2024-07-15 19:01:45.516070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.108 [2024-07-15 19:01:45.516172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.108 [2024-07-15 19:01:45.516188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.108 #27 NEW cov: 12116 ft: 13455 corp: 10/179b lim: 40 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 InsertByte- 00:07:05.367 [2024-07-15 19:01:45.566479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.367 [2024-07-15 19:01:45.566507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.367 [2024-07-15 19:01:45.566603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.367 [2024-07-15 19:01:45.566618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.367 #28 NEW cov: 12116 ft: 13496 corp: 11/200b lim: 40 exec/s: 0 rss: 73Mb L: 21/22 MS: 1 CrossOver- 00:07:05.367 [2024-07-15 19:01:45.616870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a3d0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.367 
[2024-07-15 19:01:45.616898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.367 [2024-07-15 19:01:45.616996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.367 [2024-07-15 19:01:45.617013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.367 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:05.367 #29 NEW cov: 12139 ft: 13529 corp: 12/221b lim: 40 exec/s: 0 rss: 73Mb L: 21/22 MS: 1 ShuffleBytes- 00:07:05.367 [2024-07-15 19:01:45.667243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b2b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.367 [2024-07-15 19:01:45.667271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.367 [2024-07-15 19:01:45.667374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.367 [2024-07-15 19:01:45.667391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.367 #30 NEW cov: 12139 ft: 13549 corp: 13/242b lim: 40 exec/s: 0 rss: 73Mb L: 21/22 MS: 1 ShuffleBytes- 00:07:05.367 [2024-07-15 19:01:45.727444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.367 [2024-07-15 19:01:45.727472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.367 [2024-07-15 19:01:45.727579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.367 [2024-07-15 19:01:45.727597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.367 #31 NEW cov: 12139 ft: 13558 corp: 14/263b lim: 40 exec/s: 31 rss: 73Mb L: 21/22 MS: 1 CopyPart- 00:07:05.367 [2024-07-15 19:01:45.778119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.367 [2024-07-15 19:01:45.778146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.367 [2024-07-15 19:01:45.778235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.367 [2024-07-15 19:01:45.778265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.367 [2024-07-15 19:01:45.778365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0bff1238 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.367 [2024-07-15 19:01:45.778382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.625 #32 NEW cov: 12139 ft: 13848 corp: 15/292b lim: 40 exec/s: 32 rss: 73Mb L: 29/29 MS: 1 CMP- DE: "\377\0228b8\330\231\006"- 00:07:05.625 [2024-07-15 19:01:45.828119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.625 [2024-07-15 19:01:45.828145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.625 [2024-07-15 19:01:45.828245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:f5f4f4ed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.625 [2024-07-15 19:01:45.828261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.625 #33 NEW cov: 12139 ft: 13874 corp: 16/313b lim: 40 exec/s: 33 rss: 73Mb L: 21/29 MS: 1 ChangeBinInt- 00:07:05.625 [2024-07-15 19:01:45.878378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.625 [2024-07-15 19:01:45.878404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.625 [2024-07-15 19:01:45.878499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.625 [2024-07-15 19:01:45.878516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.625 #34 NEW cov: 12139 ft: 13907 corp: 17/330b lim: 40 exec/s: 34 rss: 73Mb L: 17/29 MS: 1 ChangeBinInt- 00:07:05.625 [2024-07-15 19:01:45.928120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b2b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.625 [2024-07-15 19:01:45.928146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.625 #35 NEW cov: 12139 ft: 14633 corp: 18/344b lim: 40 exec/s: 35 rss: 73Mb L: 14/29 MS: 1 EraseBytes- 00:07:05.625 [2024-07-15 19:01:45.979073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a3d0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.625 [2024-07-15 19:01:45.979099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.625 [2024-07-15 19:01:45.979194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0bff050b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.625 [2024-07-15 19:01:45.979210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.625 #36 NEW cov: 12139 ft: 14649 corp: 19/365b lim: 40 exec/s: 36 rss: 73Mb L: 21/29 MS: 1 CMP- DE: "\377\005"- 00:07:05.625 [2024-07-15 19:01:46.039343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.625 [2024-07-15 19:01:46.039377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.625 [2024-07-15 19:01:46.039470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0bff05 cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.625 [2024-07-15 19:01:46.039489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.883 #37 NEW cov: 12139 ft: 14660 corp: 20/386b lim: 40 exec/s: 37 rss: 73Mb L: 21/29 MS: 1 PersAutoDict- DE: "\377\005"- 00:07:05.883 [2024-07-15 19:01:46.100575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.883 [2024-07-15 19:01:46.100604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.883 [2024-07-15 19:01:46.100697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.883 [2024-07-15 19:01:46.100718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.883 [2024-07-15 19:01:46.100812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.883 [2024-07-15 19:01:46.100828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.883 [2024-07-15 19:01:46.100924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.883 [2024-07-15 19:01:46.100942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.883 #38 NEW cov: 12139 ft: 15005 corp: 21/424b lim: 40 exec/s: 38 rss: 73Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:05.883 [2024-07-15 19:01:46.150170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:f4d4f4f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.883 [2024-07-15 19:01:46.150199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.883 [2024-07-15 19:01:46.150289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.883 [2024-07-15 19:01:46.150306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.883 #39 NEW cov: 12139 ft: 15014 corp: 22/445b lim: 40 exec/s: 39 rss: 74Mb L: 21/38 MS: 1 ChangeBinInt- 00:07:05.883 [2024-07-15 19:01:46.210405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.883 [2024-07-15 19:01:46.210436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.883 [2024-07-15 19:01:46.210523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.883 [2024-07-15 19:01:46.210541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.883 #42 NEW cov: 12139 ft: 15036 corp: 23/461b lim: 40 exec/s: 42 rss: 74Mb L: 16/38 MS: 3 CrossOver-InsertByte-InsertRepeatedBytes- 00:07:05.883 [2024-07-15 19:01:46.260637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:f4d4f4f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.883 [2024-07-15 19:01:46.260667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.883 [2024-07-15 19:01:46.260768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.883 [2024-07-15 19:01:46.260785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.883 #43 NEW cov: 12139 ft: 15055 corp: 24/483b lim: 40 exec/s: 43 rss: 74Mb L: 22/38 MS: 1 InsertByte- 00:07:06.141 [2024-07-15 19:01:46.320871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.141 [2024-07-15 19:01:46.320900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.141 [2024-07-15 19:01:46.321003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b68 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.141 [2024-07-15 19:01:46.321021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.141 #44 NEW cov: 12139 ft: 15069 corp: 25/505b lim: 40 exec/s: 44 rss: 74Mb L: 22/38 MS: 1 InsertByte- 00:07:06.141 [2024-07-15 19:01:46.371501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0bffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.141 [2024-07-15 19:01:46.371529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.141 [2024-07-15 19:01:46.371620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffff2b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.141 [2024-07-15 19:01:46.371639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.141 [2024-07-15 19:01:46.371734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.141 [2024-07-15 19:01:46.371749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.141 #45 NEW cov: 12139 ft: 15111 corp: 26/532b lim: 40 exec/s: 45 rss: 74Mb L: 27/38 MS: 1 InsertRepeatedBytes- 00:07:06.141 [2024-07-15 19:01:46.421163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a013100 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.141 [2024-07-15 
19:01:46.421190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.141 [2024-07-15 19:01:46.421301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.141 [2024-07-15 19:01:46.421319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.141 #46 NEW cov: 12139 ft: 15126 corp: 27/549b lim: 40 exec/s: 46 rss: 74Mb L: 17/38 MS: 1 CMP- DE: "\0011"- 00:07:06.141 [2024-07-15 19:01:46.481905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b01 cdw11:310b0bff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.141 [2024-07-15 19:01:46.481932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.141 [2024-07-15 19:01:46.482033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff2b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.141 [2024-07-15 19:01:46.482061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.141 [2024-07-15 19:01:46.482161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.141 [2024-07-15 19:01:46.482178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.141 #47 NEW cov: 12139 ft: 15168 corp: 28/578b lim: 40 exec/s: 47 rss: 74Mb L: 29/38 MS: 1 PersAutoDict- DE: "\0011"- 00:07:06.141 [2024-07-15 19:01:46.542041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a3d0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.141 [2024-07-15 19:01:46.542067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.141 [2024-07-15 19:01:46.542162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0bc3 cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.141 [2024-07-15 19:01:46.542179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.141 #48 NEW cov: 12139 ft: 15177 corp: 29/600b lim: 40 exec/s: 48 rss: 74Mb L: 22/38 MS: 1 InsertByte- 00:07:06.400 [2024-07-15 19:01:46.592720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.400 [2024-07-15 19:01:46.592747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.400 [2024-07-15 19:01:46.592848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.400 [2024-07-15 19:01:46.592866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.400 [2024-07-15 19:01:46.592949] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.400 [2024-07-15 19:01:46.592966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.400 [2024-07-15 19:01:46.593054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.400 [2024-07-15 19:01:46.593071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.400 #49 NEW cov: 12139 ft: 15179 corp: 30/637b lim: 40 exec/s: 49 rss: 74Mb L: 37/38 MS: 1 InsertRepeatedBytes- 00:07:06.400 [2024-07-15 19:01:46.642594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.400 [2024-07-15 19:01:46.642620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.400 [2024-07-15 19:01:46.642720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ff050b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.400 [2024-07-15 19:01:46.642737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.400 #50 NEW cov: 12139 ft: 15180 corp: 31/658b lim: 40 exec/s: 50 rss: 74Mb L: 21/38 MS: 1 PersAutoDict- DE: "\377\005"- 00:07:06.401 [2024-07-15 19:01:46.692688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a3d0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.401 [2024-07-15 19:01:46.692714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.401 #51 NEW cov: 12139 ft: 15188 corp: 32/671b lim: 40 exec/s: 25 rss: 74Mb L: 13/38 MS: 1 EraseBytes- 00:07:06.401 #51 DONE cov: 12139 ft: 15188 corp: 32/671b lim: 40 exec/s: 25 rss: 74Mb 00:07:06.401 ###### Recommended dictionary. ###### 00:07:06.401 "\377\0228b8\330\231\006" # Uses: 0 00:07:06.401 "\377\005" # Uses: 2 00:07:06.401 "\0011" # Uses: 1 00:07:06.401 ###### End of recommended dictionary. 
###### 00:07:06.401 Done 51 runs in 2 second(s) 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:06.659 19:01:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:06.659 [2024-07-15 19:01:46.895981] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:07:06.660 [2024-07-15 19:01:46.896051] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673682 ] 00:07:06.660 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.918 [2024-07-15 19:01:47.104018] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.918 [2024-07-15 19:01:47.174681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.918 [2024-07-15 19:01:47.234532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.918 [2024-07-15 19:01:47.250828] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:06.918 INFO: Running with entropic power schedule (0xFF, 100). 00:07:06.918 INFO: Seed: 2009179428 00:07:06.918 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:06.918 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:06.918 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:06.918 INFO: A corpus is not provided, starting from an empty corpus 00:07:06.918 #2 INITED exec/s: 0 rss: 65Mb 00:07:06.918 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:06.918 This may also happen if the target rejected all inputs we tried so far 00:07:06.918 [2024-07-15 19:01:47.316539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.918 [2024-07-15 19:01:47.316570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.918 [2024-07-15 19:01:47.316642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.918 [2024-07-15 19:01:47.316656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.918 [2024-07-15 19:01:47.316709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.918 [2024-07-15 19:01:47.316723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.918 [2024-07-15 19:01:47.316775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.918 [2024-07-15 19:01:47.316789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.918 [2024-07-15 19:01:47.316844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78787878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.918 [2024-07-15 19:01:47.316857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.436 NEW_FUNC[1/695]: 0x496390 in fuzz_admin_directive_receive_command 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:07.436 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:07.436 #14 NEW cov: 11877 ft: 11875 corp: 2/41b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:07.436 [2024-07-15 19:01:47.658018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.658105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.436 [2024-07-15 19:01:47.658247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.658289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.436 [2024-07-15 19:01:47.658401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:18787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.658440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.436 [2024-07-15 19:01:47.658552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.658590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.436 [2024-07-15 19:01:47.658705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78787878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.658744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.436 #20 NEW cov: 12013 ft: 12411 corp: 3/81b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 ChangeByte- 00:07:07.436 [2024-07-15 19:01:47.716954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a7878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.716981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.436 #23 NEW cov: 12019 ft: 13436 corp: 4/94b lim: 40 exec/s: 0 rss: 72Mb L: 13/40 MS: 3 ShuffleBytes-CopyPart-CrossOver- 00:07:07.436 [2024-07-15 19:01:47.757539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.757566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.436 [2024-07-15 19:01:47.757623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.757637] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.436 [2024-07-15 19:01:47.757690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:58787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.757703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.436 [2024-07-15 19:01:47.757760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.757773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.436 [2024-07-15 19:01:47.757826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78787878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.757839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.436 #24 NEW cov: 12104 ft: 13690 corp: 5/134b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 ChangeBit- 00:07:07.436 [2024-07-15 19:01:47.807214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2e49ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.807244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.436 #29 NEW cov: 12104 ft: 13830 corp: 6/146b lim: 40 exec/s: 0 rss: 72Mb L: 12/40 MS: 5 CrossOver-ShuffleBytes-ChangeByte-InsertByte-InsertRepeatedBytes- 00:07:07.436 [2024-07-15 19:01:47.847297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2e49ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.436 [2024-07-15 19:01:47.847323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.695 #30 NEW cov: 12104 ft: 13897 corp: 7/160b lim: 40 exec/s: 0 rss: 72Mb L: 14/40 MS: 1 CrossOver- 00:07:07.695 [2024-07-15 19:01:47.897443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a001338 cdw11:68689201 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.695 [2024-07-15 19:01:47.897470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.695 #31 NEW cov: 12104 ft: 13942 corp: 8/169b lim: 40 exec/s: 0 rss: 72Mb L: 9/40 MS: 1 CMP- DE: "\000\0238hh\222\0016"- 00:07:07.695 [2024-07-15 19:01:47.937570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:13000a38 cdw11:68689201 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.695 [2024-07-15 19:01:47.937594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.695 #32 NEW cov: 12104 ft: 14059 corp: 9/178b lim: 40 exec/s: 0 rss: 72Mb L: 9/40 MS: 1 ShuffleBytes- 00:07:07.695 [2024-07-15 19:01:47.987689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:282e49ff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.695 [2024-07-15 19:01:47.987714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.695 #33 NEW cov: 12104 ft: 14077 corp: 10/191b lim: 40 exec/s: 0 rss: 72Mb L: 13/40 MS: 1 InsertByte- 00:07:07.695 [2024-07-15 19:01:48.027769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a001338 cdw11:68689201 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.695 [2024-07-15 19:01:48.027794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.695 #34 NEW cov: 12104 ft: 14187 corp: 11/201b lim: 40 exec/s: 0 rss: 72Mb L: 10/40 MS: 1 CopyPart- 00:07:07.695 [2024-07-15 19:01:48.068385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78001338 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.695 [2024-07-15 19:01:48.068410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.695 [2024-07-15 19:01:48.068482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:68689201 cdw11:36787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.695 [2024-07-15 19:01:48.068499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.695 [2024-07-15 19:01:48.068553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:58787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.695 [2024-07-15 19:01:48.068566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.695 [2024-07-15 19:01:48.068621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.695 [2024-07-15 19:01:48.068634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.695 [2024-07-15 19:01:48.068690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78787878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.695 [2024-07-15 19:01:48.068703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.695 #35 NEW cov: 12104 ft: 14216 corp: 12/241b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 PersAutoDict- DE: "\000\0238hh\222\0016"- 00:07:07.695 [2024-07-15 19:01:48.118068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:102e49ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.695 [2024-07-15 19:01:48.118092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.954 #37 NEW cov: 12104 ft: 14229 corp: 13/249b lim: 40 exec/s: 0 rss: 72Mb L: 8/40 MS: 2 EraseBytes-InsertByte- 00:07:07.954 [2024-07-15 19:01:48.158618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 
[2024-07-15 19:01:48.158643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.954 [2024-07-15 19:01:48.158700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.158713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.954 [2024-07-15 19:01:48.158782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.158795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.954 [2024-07-15 19:01:48.158849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.158862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.954 [2024-07-15 19:01:48.158915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78017878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.158928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.954 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:07.954 #38 NEW cov: 12127 ft: 14250 corp: 14/289b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 ChangeByte- 00:07:07.954 [2024-07-15 19:01:48.198745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.198773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.954 [2024-07-15 19:01:48.198830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.198843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.954 [2024-07-15 19:01:48.198911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.198925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.954 [2024-07-15 19:01:48.198982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.198995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.954 [2024-07-15 19:01:48.199048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 
cdw10:78787878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.199061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.954 #39 NEW cov: 12127 ft: 14297 corp: 15/329b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:07.954 [2024-07-15 19:01:48.238391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:13000a38 cdw11:68689201 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.238415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.954 #40 NEW cov: 12127 ft: 14312 corp: 16/338b lim: 40 exec/s: 0 rss: 72Mb L: 9/40 MS: 1 ChangeASCIIInt- 00:07:07.954 [2024-07-15 19:01:48.288491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a001338 cdw11:68689201 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.288517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.954 #41 NEW cov: 12127 ft: 14336 corp: 17/347b lim: 40 exec/s: 41 rss: 72Mb L: 9/40 MS: 1 ChangeASCIIInt- 00:07:07.954 [2024-07-15 19:01:48.328767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.328791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.954 [2024-07-15 19:01:48.328844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.328858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.954 #42 NEW cov: 12127 ft: 14555 corp: 18/368b lim: 40 exec/s: 42 rss: 73Mb L: 21/40 MS: 1 EraseBytes- 00:07:07.954 [2024-07-15 19:01:48.378818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a7819 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.954 [2024-07-15 19:01:48.378843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.213 #43 NEW cov: 12127 ft: 14595 corp: 19/382b lim: 40 exec/s: 43 rss: 73Mb L: 14/40 MS: 1 InsertByte- 00:07:08.213 [2024-07-15 19:01:48.428887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a7819 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.213 [2024-07-15 19:01:48.428912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.213 #44 NEW cov: 12127 ft: 14603 corp: 20/396b lim: 40 exec/s: 44 rss: 73Mb L: 14/40 MS: 1 ChangeByte- 00:07:08.213 [2024-07-15 19:01:48.479032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:282e49ff cdw11:ff000dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.213 [2024-07-15 19:01:48.479055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:08.213 #45 NEW cov: 12127 ft: 14667 corp: 21/409b lim: 40 exec/s: 45 rss: 73Mb L: 13/40 MS: 1 ChangeBinInt- 00:07:08.213 [2024-07-15 19:01:48.529177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2e49ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.213 [2024-07-15 19:01:48.529202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.213 #46 NEW cov: 12127 ft: 14713 corp: 22/421b lim: 40 exec/s: 46 rss: 73Mb L: 12/40 MS: 1 CrossOver- 00:07:08.213 [2024-07-15 19:01:48.569291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.213 [2024-07-15 19:01:48.569316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.213 #47 NEW cov: 12127 ft: 14743 corp: 23/432b lim: 40 exec/s: 47 rss: 73Mb L: 11/40 MS: 1 EraseBytes- 00:07:08.213 [2024-07-15 19:01:48.619874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.213 [2024-07-15 19:01:48.619901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.213 [2024-07-15 19:01:48.619954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.213 [2024-07-15 19:01:48.619968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.213 [2024-07-15 19:01:48.620020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:58787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.213 [2024-07-15 19:01:48.620033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.213 [2024-07-15 19:01:48.620086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.213 [2024-07-15 19:01:48.620098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.213 [2024-07-15 19:01:48.620151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78587878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.213 [2024-07-15 19:01:48.620164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.213 #48 NEW cov: 12127 ft: 14755 corp: 24/472b lim: 40 exec/s: 48 rss: 73Mb L: 40/40 MS: 1 ChangeBit- 00:07:08.472 [2024-07-15 19:01:48.660025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.660050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.472 [2024-07-15 19:01:48.660108] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.660121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.472 [2024-07-15 19:01:48.660180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:58787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.660193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.472 [2024-07-15 19:01:48.660254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.660267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.472 [2024-07-15 19:01:48.660321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78787878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.660334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.472 #49 NEW cov: 12127 ft: 14776 corp: 25/512b lim: 40 exec/s: 49 rss: 73Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:08.472 [2024-07-15 19:01:48.700136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.700160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.472 [2024-07-15 19:01:48.700238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78780f78 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.700252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.472 [2024-07-15 19:01:48.700307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:18787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.700320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.472 [2024-07-15 19:01:48.700374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.700387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.472 [2024-07-15 19:01:48.700445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78787878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.700459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.472 #50 NEW cov: 12127 ft: 14850 corp: 26/552b lim: 40 exec/s: 50 rss: 73Mb L: 40/40 MS: 1 
ChangeByte- 00:07:08.472 [2024-07-15 19:01:48.739777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a001338 cdw11:68689201 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.739802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.472 #51 NEW cov: 12127 ft: 14869 corp: 27/561b lim: 40 exec/s: 51 rss: 73Mb L: 9/40 MS: 1 ChangeASCIIInt- 00:07:08.472 [2024-07-15 19:01:48.780334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78001338 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.780360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.472 [2024-07-15 19:01:48.780416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:68689201 cdw11:36787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.780430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.472 [2024-07-15 19:01:48.780488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:58787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.780500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.472 [2024-07-15 19:01:48.780554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:85787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.472 [2024-07-15 19:01:48.780567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.473 [2024-07-15 19:01:48.780619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78787878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.473 [2024-07-15 19:01:48.780631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.473 #52 NEW cov: 12127 ft: 14905 corp: 28/601b lim: 40 exec/s: 52 rss: 73Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:08.473 [2024-07-15 19:01:48.830004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:28282e49 cdw11:ffff000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.473 [2024-07-15 19:01:48.830030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.473 #53 NEW cov: 12127 ft: 14911 corp: 29/614b lim: 40 exec/s: 53 rss: 73Mb L: 13/40 MS: 1 CrossOver- 00:07:08.473 [2024-07-15 19:01:48.870631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78001338 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.473 [2024-07-15 19:01:48.870657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.473 [2024-07-15 19:01:48.870713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:68a26d01 cdw11:36787878 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.473 [2024-07-15 19:01:48.870727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.473 [2024-07-15 19:01:48.870781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:58787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.473 [2024-07-15 19:01:48.870794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.473 [2024-07-15 19:01:48.870847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.473 [2024-07-15 19:01:48.870861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.473 [2024-07-15 19:01:48.870916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78787878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.473 [2024-07-15 19:01:48.870929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.473 #54 NEW cov: 12127 ft: 14918 corp: 30/654b lim: 40 exec/s: 54 rss: 73Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:08.732 [2024-07-15 19:01:48.910778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:48.910805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.732 [2024-07-15 19:01:48.910863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:48.910880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.732 [2024-07-15 19:01:48.910937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:48.910951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.732 [2024-07-15 19:01:48.911005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:48.911018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.732 [2024-07-15 19:01:48.911076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78787878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:48.911089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.732 #55 NEW cov: 12127 ft: 14927 corp: 31/694b lim: 40 exec/s: 55 rss: 73Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:08.732 [2024-07-15 19:01:48.950364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8ac8f6eb cdw11:63381300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:48.950389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.732 #56 NEW cov: 12127 ft: 14948 corp: 32/708b lim: 40 exec/s: 56 rss: 73Mb L: 14/40 MS: 1 CMP- DE: "\212\310\366\353c8\023\000"- 00:07:08.732 [2024-07-15 19:01:49.000489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a7819 cdw11:ec787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:49.000515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.732 #57 NEW cov: 12127 ft: 14951 corp: 33/722b lim: 40 exec/s: 57 rss: 73Mb L: 14/40 MS: 1 ChangeByte- 00:07:08.732 [2024-07-15 19:01:49.040620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:282e49ff cdw11:13386868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:49.040645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.732 #58 NEW cov: 12127 ft: 14959 corp: 34/735b lim: 40 exec/s: 58 rss: 73Mb L: 13/40 MS: 1 CrossOver- 00:07:08.732 [2024-07-15 19:01:49.090769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a00132e cdw11:68689201 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:49.090795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.732 #59 NEW cov: 12127 ft: 14960 corp: 35/744b lim: 40 exec/s: 59 rss: 73Mb L: 9/40 MS: 1 ChangeByte- 00:07:08.732 [2024-07-15 19:01:49.141365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:49.141391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.732 [2024-07-15 19:01:49.141447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:49.141461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.732 [2024-07-15 19:01:49.141515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:58787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:49.141528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.732 [2024-07-15 19:01:49.141585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:7c787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:49.141598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.732 [2024-07-15 19:01:49.141652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78787878 cdw11:7878780a SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.732 [2024-07-15 19:01:49.141664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.732 #60 NEW cov: 12127 ft: 14965 corp: 36/784b lim: 40 exec/s: 60 rss: 73Mb L: 40/40 MS: 1 ChangeBit- 00:07:08.991 [2024-07-15 19:01:49.181489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7828282e cdw11:49ffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.991 [2024-07-15 19:01:49.181514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.991 [2024-07-15 19:01:49.181567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0dffffff cdw11:ffff7878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.991 [2024-07-15 19:01:49.181580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.991 [2024-07-15 19:01:49.181635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:58787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.991 [2024-07-15 19:01:49.181648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.991 [2024-07-15 19:01:49.181704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.991 [2024-07-15 19:01:49.181716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.991 [2024-07-15 19:01:49.181770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:78787878 cdw11:7878780a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.991 [2024-07-15 19:01:49.181783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.991 #61 NEW cov: 12127 ft: 14995 corp: 37/824b lim: 40 exec/s: 61 rss: 73Mb L: 40/40 MS: 1 CrossOver- 00:07:08.991 [2024-07-15 19:01:49.221104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:13000a38 cdw11:6868922f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.991 [2024-07-15 19:01:49.221128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.991 #62 NEW cov: 12127 ft: 15001 corp: 38/833b lim: 40 exec/s: 62 rss: 73Mb L: 9/40 MS: 1 ChangeByte- 00:07:08.991 [2024-07-15 19:01:49.271379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00133868 cdw11:68920136 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.991 [2024-07-15 19:01:49.271404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.991 [2024-07-15 19:01:49.271476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:282e49ff cdw11:ff000dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.991 [2024-07-15 19:01:49.271490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:08.991 #63 NEW cov: 12127 ft: 15005 corp: 39/854b lim: 40 exec/s: 63 rss: 73Mb L: 21/40 MS: 1 PersAutoDict- DE: "\000\0238hh\222\0016"-
00:07:08.991 [2024-07-15 19:01:49.311348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a7878 cdw11:70787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:08.991 [2024-07-15 19:01:49.311378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:08.991 #64 pulse cov: 12127 ft: 15015 corp: 39/854b lim: 40 exec/s: 32 rss: 73Mb
00:07:08.991 #64 NEW cov: 12127 ft: 15015 corp: 40/867b lim: 40 exec/s: 32 rss: 73Mb L: 13/40 MS: 1 ChangeBit-
00:07:08.991 #64 DONE cov: 12127 ft: 15015 corp: 40/867b lim: 40 exec/s: 32 rss: 73Mb
00:07:08.991 ###### Recommended dictionary. ######
00:07:08.991 "\000\0238hh\222\0016" # Uses: 2
00:07:08.991 "\212\310\366\353c8\023\000" # Uses: 0
00:07:08.991 ###### End of recommended dictionary. ######
00:07:08.991 Done 64 runs in 2 second(s)
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414'
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:09.250 19:01:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14
00:07:09.250 [2024-07-15 19:01:49.512499] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
[2024-07-15 19:01:49.512581] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674053 ]
EAL: No free 2048 kB hugepages reported on node 1
00:07:09.509 [2024-07-15 19:01:49.724896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:09.509 [2024-07-15 19:01:49.794817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.509 [2024-07-15 19:01:49.854259] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:09.509 [2024-07-15 19:01:49.870564] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 ***
00:07:09.509 INFO: Running with entropic power schedule (0xFF, 100).
00:07:09.509 INFO: Seed: 334247927
00:07:09.509 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1),
00:07:09.509 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218),
00:07:09.509 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:07:09.509 INFO: A corpus is not provided, starting from an empty corpus
00:07:09.509 #2 INITED exec/s: 0 rss: 65Mb
00:07:09.509 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:09.509 This may also happen if the target rejected all inputs we tried so far
00:07:09.509 [2024-07-15 19:01:49.925411] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:09.509 [2024-07-15 19:01:49.925446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:09.509 [2024-07-15 19:01:49.925481] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:09.509 [2024-07-15 19:01:49.925496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:09.509 [2024-07-15 19:01:49.925526] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:09.509 [2024-07-15 19:01:49.925542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:10.027 NEW_FUNC[1/696]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392
00:07:10.027 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:10.027 #16 NEW cov: 11872 ft: 11876 corp: 2/28b lim: 35 exec/s: 0 rss: 72Mb L: 27/27 MS: 4 CrossOver-CopyPart-CopyPart-InsertRepeatedBytes-
00:07:10.027 [2024-07-15 19:01:50.306500] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.027 [2024-07-15 19:01:50.306549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.027 [2024-07-15 19:01:50.306585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.027 [2024-07-15 19:01:50.306601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.027 [2024-07-15 19:01:50.306632] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.027 [2024-07-15 19:01:50.306647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:10.027 [2024-07-15 19:01:50.306677] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.027 [2024-07-15 19:01:50.306694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:10.027 [2024-07-15 19:01:50.306740] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.027 [2024-07-15 19:01:50.306758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:10.027 #17 NEW cov: 12007 ft: 12810 corp: 3/63b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CMP- DE: "\000\004\000\000\000\000\000\000"-
00:07:10.027 [2024-07-15 19:01:50.396612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.027 [2024-07-15 19:01:50.396649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.027 [2024-07-15 19:01:50.396699] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.027 [2024-07-15 19:01:50.396716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.027 [2024-07-15 19:01:50.396751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.027 [2024-07-15 19:01:50.396767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:10.027 [2024-07-15 19:01:50.396798] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.027 [2024-07-15 19:01:50.396813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:10.027 [2024-07-15 19:01:50.396844] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.027 [2024-07-15 19:01:50.396859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:10.286 #18 NEW cov: 12013 ft: 13038 corp: 4/98b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CopyPart-
00:07:10.286 [2024-07-15 19:01:50.476821] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.476856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.476890] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.476906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.476937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.476954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.476984] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.476999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.477029] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.477044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:10.286 #19 NEW cov: 12098 ft: 13303 corp: 5/133b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CrossOver-
00:07:10.286 [2024-07-15 19:01:50.536732] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.536764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.536799] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.536815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.286 #20 NEW cov: 12098 ft: 13705 corp: 6/153b lim: 35 exec/s: 0 rss: 73Mb L: 20/35 MS: 1 EraseBytes-
00:07:10.286 [2024-07-15 19:01:50.617176] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.617207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.617250] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.617274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.617305] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.617320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.617350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.617365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.617395] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.617410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:10.286 #21 NEW cov: 12098 ft: 13818 corp: 7/188b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CrossOver-
00:07:10.286 [2024-07-15 19:01:50.667227] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.667272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.667307] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.667323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.667353] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.667369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.667398] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.667413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:10.286 [2024-07-15 19:01:50.667443] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.286 [2024-07-15 19:01:50.667458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:10.286 #22 NEW cov: 12098 ft: 13857 corp: 8/223b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 PersAutoDict- DE: "\000\004\000\000\000\000\000\000"-
00:07:10.545 [2024-07-15 19:01:50.727185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.545 [2024-07-15 19:01:50.727214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.545 [2024-07-15 19:01:50.727270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.545 [2024-07-15 19:01:50.727286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.545 #23 NEW cov: 12098 ft: 13885 corp: 9/243b lim: 35 exec/s: 0 rss: 73Mb L: 20/35 MS: 1 ChangeBit-
00:07:10.545 [2024-07-15 19:01:50.807633] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.545 [2024-07-15 19:01:50.807663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.545 [2024-07-15 19:01:50.807698] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.545 [2024-07-15 19:01:50.807718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.545 [2024-07-15 19:01:50.807748] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.545 [2024-07-15 19:01:50.807763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:10.545 [2024-07-15 19:01:50.807793] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.545 [2024-07-15 19:01:50.807809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:10.545 [2024-07-15 19:01:50.807839] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.545 [2024-07-15 19:01:50.807854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:10.545 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:07:10.545 #24 NEW cov: 12121 ft: 13969 corp: 10/278b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBit-
00:07:10.545 [2024-07-15 19:01:50.867651] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.545 [2024-07-15 19:01:50.867681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.545 [2024-07-15 19:01:50.867729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.545 [2024-07-15 19:01:50.867745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.545 [2024-07-15 19:01:50.867775] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.545 [2024-07-15 19:01:50.867791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:10.545 #25 NEW cov: 12121 ft: 13989 corp: 11/299b lim: 35 exec/s: 25 rss: 73Mb L: 21/35 MS: 1 InsertByte-
00:07:10.545 [2024-07-15 19:01:50.917792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.545 [2024-07-15 19:01:50.917822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.545 [2024-07-15 19:01:50.917856] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.545 [2024-07-15 19:01:50.917872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.545 [2024-07-15 19:01:50.917902] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.546 [2024-07-15 19:01:50.917918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:10.546 #26 NEW cov: 12121 ft: 14013 corp: 12/326b lim: 35 exec/s: 26 rss: 73Mb L: 27/35 MS: 1 ChangeBit-
00:07:10.804 [2024-07-15 19:01:50.977927] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.804 [2024-07-15 19:01:50.977959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.804 [2024-07-15 19:01:50.977994] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.804 [2024-07-15 19:01:50.978010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.804 #27 NEW cov: 12121 ft: 14089 corp: 13/346b lim: 35 exec/s: 27 rss: 73Mb L: 20/35 MS: 1 CMP- DE: "\001\000\000\006"-
00:07:10.804 [2024-07-15 19:01:51.028105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.804 [2024-07-15 19:01:51.028134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.804 [2024-07-15 19:01:51.028182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.804 [2024-07-15 19:01:51.028198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.805 [2024-07-15 19:01:51.028235] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:6 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.805 [2024-07-15 19:01:51.028251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:10.805 NEW_FUNC[1/3]: 0x4b4bb0 in feat_temperature_threshold /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:295
00:07:10.805 NEW_FUNC[2/3]: 0x11e5190 in temp_threshold_opts_valid /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1640
00:07:10.805 #28 NEW cov: 12175 ft: 14168 corp: 14/375b lim: 35 exec/s: 28 rss: 73Mb L: 29/35 MS: 1 PersAutoDict- DE: "\000\004\000\000\000\000\000\000"-
00:07:10.805 [2024-07-15 19:01:51.118449] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.805 [2024-07-15 19:01:51.118482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.805 [2024-07-15 19:01:51.118531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.805 [2024-07-15 19:01:51.118548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.805 [2024-07-15 19:01:51.118579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.805 [2024-07-15 19:01:51.118595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:10.805 [2024-07-15 19:01:51.118626] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.805 [2024-07-15 19:01:51.118642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:10.805 [2024-07-15 19:01:51.118672] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.805 [2024-07-15 19:01:51.118687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:10.805 #29 NEW cov: 12175 ft: 14247 corp: 15/410b lim: 35 exec/s: 29 rss: 73Mb L: 35/35 MS: 1 ChangeByte-
00:07:10.805 [2024-07-15 19:01:51.168455] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.805 [2024-07-15 19:01:51.168487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.805 [2024-07-15 19:01:51.168522] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.805 [2024-07-15 19:01:51.168537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:10.805 [2024-07-15 19:01:51.168567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.805 [2024-07-15 19:01:51.168589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:10.805 #30 NEW cov: 12182 ft: 14275 corp: 16/431b lim: 35 exec/s: 30 rss: 73Mb L: 21/35 MS: 1 ChangeBinInt-
00:07:10.805 [2024-07-15 19:01:51.228575] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.805 [2024-07-15 19:01:51.228608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:10.805 [2024-07-15 19:01:51.228642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:10.805 [2024-07-15 19:01:51.228658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.063 #31 NEW cov: 12182 ft: 14319 corp: 17/451b lim: 35 exec/s: 31 rss: 73Mb L: 20/35 MS: 1 ChangeBinInt-
00:07:11.063 [2024-07-15 19:01:51.299982] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.063 [2024-07-15 19:01:51.300011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.063 [2024-07-15 19:01:51.300075] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.300090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.064 [2024-07-15 19:01:51.300149] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.300163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:11.064 [2024-07-15 19:01:51.300228] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.300242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:11.064 [2024-07-15 19:01:51.300304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.300317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:11.064 #32 NEW cov: 12182 ft: 14397 corp: 18/486b lim: 35 exec/s: 32 rss: 73Mb L: 35/35 MS: 1 ChangeBit-
00:07:11.064 [2024-07-15 19:01:51.349786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.349815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.064 [2024-07-15 19:01:51.349876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.349891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.064 [2024-07-15 19:01:51.349949] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.349963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:11.064 #33 NEW cov: 12182 ft: 14431 corp: 19/513b lim: 35 exec/s: 33 rss: 73Mb L: 27/35 MS: 1 ShuffleBytes-
00:07:11.064 [2024-07-15 19:01:51.400260] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.400286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.064 [2024-07-15 19:01:51.400349] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.400366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.064 [2024-07-15 19:01:51.400430] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.400444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:11.064 [2024-07-15 19:01:51.400506] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.400520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:11.064 [2024-07-15 19:01:51.400580] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.400594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:11.064 #34 NEW cov: 12182 ft: 14523 corp: 20/548b lim: 35 exec/s: 34 rss: 73Mb L: 35/35 MS: 1 CopyPart-
00:07:11.064 [2024-07-15 19:01:51.450379] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.450405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.064 [2024-07-15 19:01:51.450467] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.450481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.064 [2024-07-15 19:01:51.450580] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.450593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:11.064 [2024-07-15 19:01:51.450653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.064 [2024-07-15 19:01:51.450667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:11.064 NEW_FUNC[1/2]: 0x4b28e0 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273
00:07:11.064 NEW_FUNC[2/2]: 0x11e8fa0 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1603
00:07:11.064 #35 NEW cov: 12239 ft: 14621 corp: 21/583b lim: 35 exec/s: 35 rss: 73Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\000\000\006"-
00:07:11.323 [2024-07-15 19:01:51.500199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.500229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.323 [2024-07-15 19:01:51.500310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.500325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.323 [2024-07-15 19:01:51.500388] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.500402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:11.323 #36 NEW cov: 12239 ft: 14658 corp: 22/610b lim: 35 exec/s: 36 rss: 74Mb L: 27/35 MS: 1 ChangeBinInt-
00:07:11.323 [2024-07-15 19:01:51.550342] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.550367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.323 [2024-07-15 19:01:51.550444] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.550458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.323 [2024-07-15 19:01:51.550520] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.550534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:11.323 #37 NEW cov: 12239 ft: 14664 corp: 23/631b lim: 35 exec/s: 37 rss: 74Mb L: 21/35 MS: 1 InsertByte-
00:07:11.323 [2024-07-15 19:01:51.600470] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.600497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.323 [2024-07-15 19:01:51.600574] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.600589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.323 [2024-07-15 19:01:51.600650] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.600664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:11.323 #38 NEW cov: 12239 ft: 14745 corp: 24/658b lim: 35 exec/s: 38 rss: 74Mb L: 27/35 MS: 1 CopyPart-
00:07:11.323 [2024-07-15 19:01:51.640741] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.640768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.323 [2024-07-15 19:01:51.640844] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.640858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.323 [2024-07-15 19:01:51.640918] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.640931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:11.323 [2024-07-15 19:01:51.640991] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.323 [2024-07-15 19:01:51.641004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:11.324 #39 NEW cov: 12239 ft: 14781 corp: 25/686b lim: 35 exec/s: 39 rss: 74Mb L: 28/35 MS: 1 InsertByte-
00:07:11.324 [2024-07-15 19:01:51.680850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.324 [2024-07-15 19:01:51.680876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.324 [2024-07-15 19:01:51.680937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.324 [2024-07-15 19:01:51.680950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.324 [2024-07-15 19:01:51.681028] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.324 [2024-07-15 19:01:51.681043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:11.324 [2024-07-15 19:01:51.681104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.324 [2024-07-15 19:01:51.681117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:11.324 #40 NEW cov: 12239 ft: 14793 corp: 26/720b lim: 35 exec/s: 40 rss: 74Mb L: 34/35 MS: 1 EraseBytes-
00:07:11.324 [2024-07-15 19:01:51.731268] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.324 [2024-07-15 19:01:51.731294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.324 [2024-07-15 19:01:51.731357] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.324 [2024-07-15 19:01:51.731371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.324 [2024-07-15 19:01:51.731479] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.324 [2024-07-15 19:01:51.731493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:11.324 [2024-07-15 19:01:51.731554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.324 [2024-07-15 19:01:51.731568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:11.583 #41 NEW cov: 12239 ft: 14810 corp: 27/755b lim: 35 exec/s: 41 rss: 74Mb L: 35/35 MS: 1 ChangeByte-
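The MS: trailers above name libFuzzer's mutation chain for each input (ChangeByte, CrossOver, and so on), and PersAutoDict- DE: "\001\000\000\006"- marks reuse of a persistent auto-dictionary entry, printed as C-style octal escapes. A quick sketch to see the raw bytes behind such an entry; bash's builtin printf understands the same escapes, and xxd is assumed to be available:

  # Decode the dictionary entry "\001\000\000\006" seen in the trailers above.
  printf '\001\000\000\006' | xxd
  # expected output: 00000000: 0100 0006                                ....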
00:07:11.583 [2024-07-15 19:01:51.780948] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.780974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.583 [2024-07-15 19:01:51.781036] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.781050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.583 [2024-07-15 19:01:51.781110] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.781124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:11.583 #42 NEW cov: 12239 ft: 14823 corp: 28/776b lim: 35 exec/s: 42 rss: 74Mb L: 21/35 MS: 1 ChangeBinInt-
00:07:11.583 [2024-07-15 19:01:51.821273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.821299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.583 [2024-07-15 19:01:51.821361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.821375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.583 [2024-07-15 19:01:51.821436] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.821450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:11.583 [2024-07-15 19:01:51.821510] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.821524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:11.583 #43 NEW cov: 12239 ft: 14837 corp: 29/804b lim: 35 exec/s: 43 rss: 74Mb L: 28/35 MS: 1 ChangeByte-
00:07:11.583 [2024-07-15 19:01:51.871540] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.871566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.583 [2024-07-15 19:01:51.871627] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.871641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.583 [2024-07-15 19:01:51.871702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.871715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:11.583 [2024-07-15 19:01:51.871775] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.871788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:11.583 [2024-07-15 19:01:51.871850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.871863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:11.583 #44 NEW cov: 12239 ft: 14857 corp: 30/839b lim: 35 exec/s: 44 rss: 74Mb L: 35/35 MS: 1 CopyPart-
00:07:11.583 [2024-07-15 19:01:51.911726] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.583 [2024-07-15 19:01:51.911752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:11.583 [2024-07-15 19:01:51.911832] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.584 [2024-07-15 19:01:51.911847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:11.584 [2024-07-15 19:01:51.911908] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.584 [2024-07-15 19:01:51.911922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:11.584 [2024-07-15 19:01:51.911983] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.584 [2024-07-15 19:01:51.911997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:11.584 [2024-07-15 19:01:51.912056] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000055 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:11.584 [2024-07-15 19:01:51.912070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:11.584 #45 NEW cov: 12239 ft: 14898 corp: 31/874b lim: 35 exec/s: 22 rss: 74Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\000\000\006"-
00:07:11.584 #45 DONE cov: 12239 ft: 14898 corp: 31/874b lim: 35 exec/s: 22 rss: 74Mb
00:07:11.584 ###### Recommended dictionary. ######
00:07:11.584 "\000\004\000\000\000\000\000\000" # Uses: 2
00:07:11.584 "\001\000\000\006" # Uses: 2
00:07:11.584 ###### End of recommended dictionary. ######
00:07:11.584 Done 45 runs in 2 second(s)
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415'
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:11.850 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:11.851 19:01:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15
[2024-07-15 19:01:52.105039] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
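The xtrace lines above show how nvmf/run.sh derives each fuzzer's listener before the target starts: the fuzzer index is zero-padded with printf %02d and appended to 44 to form the TCP port (15 becomes 4415), and the template config's trsvcid is rewritten to match. A standalone bash sketch of the same steps; fuzz_json.conf stands in for the template at test/fuzz/llvm/nvmf/fuzz_json.conf and is assumed to sit in the current directory:

  # Sketch of the port/config derivation traced above (not the script itself).
  fuzzer_type=15
  port="44$(printf %02d "$fuzzer_type")"   # printf %02d 15 -> "15", so port=4415
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" fuzz_json.conf \
    > "/tmp/fuzz_json_${fuzzer_type}.conf"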
00:07:11.851 [2024-07-15 19:01:52.105107] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674418 ] 00:07:11.851 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.120 [2024-07-15 19:01:52.313967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.120 [2024-07-15 19:01:52.389166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.120 [2024-07-15 19:01:52.449124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.120 [2024-07-15 19:01:52.465436] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:12.120 INFO: Running with entropic power schedule (0xFF, 100). 00:07:12.120 INFO: Seed: 2928213467 00:07:12.120 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:12.120 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:12.120 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:12.120 INFO: A corpus is not provided, starting from an empty corpus 00:07:12.120 #2 INITED exec/s: 0 rss: 65Mb 00:07:12.120 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:12.120 This may also happen if the target rejected all inputs we tried so far 00:07:12.120 [2024-07-15 19:01:52.520130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.120 [2024-07-15 19:01:52.520166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.658 NEW_FUNC[1/695]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:12.658 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:12.658 #10 NEW cov: 11865 ft: 11866 corp: 2/9b lim: 35 exec/s: 0 rss: 71Mb L: 8/8 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:07:12.658 [2024-07-15 19:01:52.891023] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.658 [2024-07-15 19:01:52.891070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.658 #17 NEW cov: 11995 ft: 12438 corp: 3/18b lim: 35 exec/s: 0 rss: 72Mb L: 9/9 MS: 2 ChangeByte-CrossOver- 00:07:12.658 [2024-07-15 19:01:52.951019] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.658 [2024-07-15 19:01:52.951052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.658 #18 NEW cov: 12001 ft: 12823 corp: 4/26b lim: 35 exec/s: 0 rss: 72Mb L: 8/9 MS: 1 ChangeByte- 00:07:12.658 [2024-07-15 19:01:53.031236] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.658 [2024-07-15 19:01:53.031271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.658 #19 NEW cov: 12086 ft: 13099 corp: 5/36b lim: 35 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:07:12.916 [2024-07-15 19:01:53.091451] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.916 [2024-07-15 19:01:53.091484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.916 #20 NEW cov: 12086 ft: 13169 corp: 6/45b lim: 35 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:12.916 [2024-07-15 19:01:53.171757] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.916 [2024-07-15 19:01:53.171789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.916 [2024-07-15 19:01:53.171861] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.916 [2024-07-15 19:01:53.171878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.916 [2024-07-15 19:01:53.171909] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.916 [2024-07-15 19:01:53.171924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.916 NEW_FUNC[1/1]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:12.916 #21 NEW cov: 12100 ft: 13864 corp: 7/78b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:12.916 [2024-07-15 19:01:53.231763] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.916 [2024-07-15 19:01:53.231795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.916 #22 NEW cov: 12100 ft: 13975 corp: 8/88b lim: 35 exec/s: 0 rss: 72Mb L: 10/33 MS: 1 CopyPart- 00:07:12.916 [2024-07-15 19:01:53.281876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.916 [2024-07-15 19:01:53.281908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.916 #28 NEW cov: 12100 ft: 14003 corp: 9/99b lim: 35 exec/s: 0 rss: 72Mb L: 11/33 MS: 1 InsertByte- 00:07:13.175 [2024-07-15 19:01:53.362085] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.175 [2024-07-15 19:01:53.362115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.175 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:13.175 #29 NEW cov: 12117 ft: 14020 corp: 10/111b lim: 35 exec/s: 0 rss: 72Mb L: 12/33 MS: 1 CrossOver- 00:07:13.175 [2024-07-15 19:01:53.422275] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
00:07:13.175 [2024-07-15 19:01:53.422307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.175 #30 NEW cov: 12117 ft: 14083 corp: 11/119b lim: 35 exec/s: 0 rss: 73Mb L: 8/33 MS: 1 CopyPart-
00:07:13.175 [2024-07-15 19:01:53.472467] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.175 [2024-07-15 19:01:53.472497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.175 [2024-07-15 19:01:53.472531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.175 [2024-07-15 19:01:53.472547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:13.175 #31 NEW cov: 12117 ft: 14287 corp: 12/139b lim: 35 exec/s: 31 rss: 73Mb L: 20/33 MS: 1 InsertRepeatedBytes-
00:07:13.175 [2024-07-15 19:01:53.552679] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.175 [2024-07-15 19:01:53.552710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.175 [2024-07-15 19:01:53.552744] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.175 [2024-07-15 19:01:53.552759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:13.175 #32 NEW cov: 12117 ft: 14304 corp: 13/157b lim: 35 exec/s: 32 rss: 73Mb L: 18/33 MS: 1 CrossOver-
00:07:13.433 [2024-07-15 19:01:53.612776] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.433 [2024-07-15 19:01:53.612807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.433 [2024-07-15 19:01:53.612856] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.433 [2024-07-15 19:01:53.612871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:13.433 #33 NEW cov: 12117 ft: 14322 corp: 14/175b lim: 35 exec/s: 33 rss: 73Mb L: 18/33 MS: 1 CopyPart-
00:07:13.433 [2024-07-15 19:01:53.672854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.433 [2024-07-15 19:01:53.672885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.433 #34 NEW cov: 12117 ft: 14384 corp: 15/184b lim: 35 exec/s: 34 rss: 73Mb L: 9/33 MS: 1 InsertByte-
00:07:13.433 [2024-07-15 19:01:53.753116] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.433 [2024-07-15 19:01:53.753146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.433 #35 NEW cov: 12117 ft: 14416 corp: 16/196b lim: 35 exec/s: 35 rss: 73Mb L: 12/33 MS: 1 CopyPart-
00:07:13.433 [2024-07-15 19:01:53.833362] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.433 [2024-07-15 19:01:53.833391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.692 #36 NEW cov: 12117 ft: 14471 corp: 17/210b lim: 35 exec/s: 36 rss: 73Mb L: 14/33 MS: 1 CrossOver-
00:07:13.692 [2024-07-15 19:01:53.883459] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.692 [2024-07-15 19:01:53.883489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.692 #37 NEW cov: 12117 ft: 14498 corp: 18/220b lim: 35 exec/s: 37 rss: 73Mb L: 10/33 MS: 1 InsertByte-
00:07:13.692 [2024-07-15 19:01:53.933636] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000364 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.692 [2024-07-15 19:01:53.933666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.692 [2024-07-15 19:01:53.933700] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000364 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.692 [2024-07-15 19:01:53.933715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:13.692 #38 NEW cov: 12117 ft: 14510 corp: 19/235b lim: 35 exec/s: 38 rss: 73Mb L: 15/33 MS: 1 InsertRepeatedBytes-
00:07:13.692 [2024-07-15 19:01:53.993706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.692 [2024-07-15 19:01:53.993736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.692 #39 NEW cov: 12117 ft: 14550 corp: 20/244b lim: 35 exec/s: 39 rss: 73Mb L: 9/33 MS: 1 EraseBytes-
00:07:13.692 [2024-07-15 19:01:54.043802] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.692 [2024-07-15 19:01:54.043831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.692 #40 NEW cov: 12117 ft: 14607 corp: 21/254b lim: 35 exec/s: 40 rss: 73Mb L: 10/33 MS: 1 CrossOver-
00:07:13.950 [2024-07-15 19:01:54.124092] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000364 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.950 [2024-07-15 19:01:54.124123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.950 #41 NEW cov: 12117 ft: 14670 corp: 22/264b lim: 35 exec/s: 41 rss: 73Mb L: 10/33 MS: 1 EraseBytes-
00:07:13.950 [2024-07-15 19:01:54.204297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.950 [2024-07-15 19:01:54.204326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.950 #47 NEW cov: 12117 ft: 14689 corp: 23/274b lim: 35 exec/s: 47 rss: 73Mb L: 10/33 MS: 1 InsertByte-
00:07:13.950 [2024-07-15 19:01:54.295181] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.950 [2024-07-15 19:01:54.295207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:13.950 [2024-07-15 19:01:54.295271] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.950 [2024-07-15 19:01:54.295286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:13.950 #48 NEW cov: 12117 ft: 14773 corp: 24/292b lim: 35 exec/s: 48 rss: 73Mb L: 18/33 MS: 1 ChangeBinInt-
00:07:13.950 [2024-07-15 19:01:54.365238] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:13.950 [2024-07-15 19:01:54.365265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:14.209 #49 NEW cov: 12124 ft: 14821 corp: 25/302b lim: 35 exec/s: 49 rss: 73Mb L: 10/33 MS: 1 ChangeBit-
00:07:14.209 [2024-07-15 19:01:54.415370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:14.209 [2024-07-15 19:01:54.415397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:14.209 #50 NEW cov: 12124 ft: 14900 corp: 26/310b lim: 35 exec/s: 50 rss: 73Mb L: 8/33 MS: 1 ChangeBit-
00:07:14.210 [2024-07-15 19:01:54.455710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:14.210 [2024-07-15 19:01:54.455736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:14.210 [2024-07-15 19:01:54.455870] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:14.210 [2024-07-15 19:01:54.455885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:14.210 #51 NEW cov: 12124 ft: 14971 corp: 27/335b lim: 35 exec/s: 51 rss: 73Mb L: 25/33 MS: 1 CrossOver-
00:07:14.210 [2024-07-15 19:01:54.495832] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:14.210 [2024-07-15 19:01:54.495857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:14.210 [2024-07-15 19:01:54.495915] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:14.210 [2024-07-15 19:01:54.495929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:14.210 [2024-07-15 19:01:54.495986] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:14.210 [2024-07-15 19:01:54.495998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:14.210 #52 NEW cov: 12124 ft: 14976 corp: 28/357b lim: 35 exec/s: 26 rss: 73Mb L: 22/33 MS: 1 CrossOver-
00:07:14.210 #52 DONE cov: 12124 ft: 14976 corp: 28/357b lim: 35 exec/s: 26 rss: 73Mb
00:07:14.210 Done 52 runs in 2 second(s)
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:07:14.468 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416'
00:07:14.469 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:14.469 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:14.469 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:14.469 19:01:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16
00:07:14.469 [2024-07-15 19:01:54.714237] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
00:07:14.469 [2024-07-15 19:01:54.714314] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674774 ]
00:07:14.469 EAL: No free 2048 kB hugepages reported on node 1
00:07:14.727 [2024-07-15 19:01:54.930024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.727 [2024-07-15 19:01:54.999810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.727 [2024-07-15 19:01:55.059176] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:14.727 [2024-07-15 19:01:55.075471] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 ***
00:07:14.727 INFO: Running with entropic power schedule (0xFF, 100).
00:07:14.727 INFO: Seed: 1244240738
00:07:14.727 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1),
00:07:14.727 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218),
00:07:14.727 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:07:14.727 INFO: A corpus is not provided, starting from an empty corpus
00:07:14.727 #2 INITED exec/s: 0 rss: 64Mb
00:07:14.727 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:14.727 This may also happen if the target rejected all inputs we tried so far
00:07:14.727 [2024-07-15 19:01:55.140958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:14.727 [2024-07-15 19:01:55.140991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:14.727 [2024-07-15 19:01:55.141063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:14.727 [2024-07-15 19:01:55.141079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:14.727 [2024-07-15 19:01:55.141136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:14.727 [2024-07-15 19:01:55.141152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.243 NEW_FUNC[1/696]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519
00:07:15.243 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:15.243 #7 NEW cov: 11969 ft: 11970 corp: 2/67b lim: 105 exec/s: 0 rss: 72Mb L: 66/66 MS: 5 InsertByte-ChangeBinInt-ChangeBinInt-ShuffleBytes-InsertRepeatedBytes-
00:07:15.243 [2024-07-15 19:01:55.491973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1241513984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.492027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.243 [2024-07-15 19:01:55.492097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.492118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.243 [2024-07-15 19:01:55.492185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.492205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.243 [2024-07-15 19:01:55.492276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.492296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:15.243 #10 NEW cov: 12099 ft: 13002 corp: 3/163b lim: 105 exec/s: 0 rss: 72Mb L: 96/96 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes-
00:07:15.243 [2024-07-15 19:01:55.541770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.541799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.243 [2024-07-15 19:01:55.541836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8796093022208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.541852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.243 [2024-07-15 19:01:55.541911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.541927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.243 #11 NEW cov: 12105 ft: 13332 corp: 4/229b lim: 105 exec/s: 0 rss: 72Mb L: 66/96 MS: 1 ChangeBit-
00:07:15.243 [2024-07-15 19:01:55.591915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.591943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.243 [2024-07-15 19:01:55.591990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8796093022208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.592006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.243 [2024-07-15 19:01:55.592065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.592081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.243 #12 NEW cov: 12190 ft: 13598 corp: 5/295b lim: 105 exec/s: 0 rss: 72Mb L: 66/96 MS: 1 ShuffleBytes-
00:07:15.243 [2024-07-15 19:01:55.642089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.642116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.243 [2024-07-15 19:01:55.642168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8796093022208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.642184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.243 [2024-07-15 19:01:55.642251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:272678883688448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.243 [2024-07-15 19:01:55.642269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.501 #13 NEW cov: 12190 ft: 13643 corp: 6/361b lim: 105 exec/s: 0 rss: 72Mb L: 66/96 MS: 1 ChangeBinInt-
00:07:15.501 [2024-07-15 19:01:55.692194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:29696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.501 [2024-07-15 19:01:55.692229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.501 [2024-07-15 19:01:55.692267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:34359738368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.501 [2024-07-15 19:01:55.692283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.501 [2024-07-15 19:01:55.692341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.501 [2024-07-15 19:01:55.692356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.501 #14 NEW cov: 12190 ft: 13798 corp: 7/428b lim: 105 exec/s: 0 rss: 72Mb L: 67/96 MS: 1 InsertByte-
00:07:15.501 [2024-07-15 19:01:55.732064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:671744000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.501 [2024-07-15 19:01:55.732093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.501 #19 NEW cov: 12190 ft: 14389 corp: 8/459b lim: 105 exec/s: 0 rss: 72Mb L: 31/96 MS: 5 ChangeBit-ChangeBit-InsertByte-ChangeByte-CrossOver-
00:07:15.501 [2024-07-15 19:01:55.772593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.501 [2024-07-15 19:01:55.772622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.501 [2024-07-15 19:01:55.772665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.502 [2024-07-15 19:01:55.772681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.502 [2024-07-15 19:01:55.772738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.502 [2024-07-15 19:01:55.772752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.502 [2024-07-15 19:01:55.772809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.502 [2024-07-15 19:01:55.772826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:15.502 #20 NEW cov: 12190 ft: 14451 corp: 9/543b lim: 105 exec/s: 0 rss: 72Mb L: 84/96 MS: 1 InsertRepeatedBytes-
00:07:15.502 [2024-07-15 19:01:55.812586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:29696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.502 [2024-07-15 19:01:55.812615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.502 [2024-07-15 19:01:55.812678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.502 [2024-07-15 19:01:55.812694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.502 [2024-07-15 19:01:55.812754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.502 [2024-07-15 19:01:55.812770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.502 #21 NEW cov: 12190 ft: 14490 corp: 10/626b lim: 105 exec/s: 0 rss: 72Mb L: 83/96 MS: 1 CrossOver-
00:07:15.502 [2024-07-15 19:01:55.862798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.502 [2024-07-15 19:01:55.862827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.502 [2024-07-15 19:01:55.862869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.502 [2024-07-15 19:01:55.862886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.502 [2024-07-15 19:01:55.862941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:34359738368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.502 [2024-07-15 19:01:55.862956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.502 [2024-07-15 19:01:55.863010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.502 [2024-07-15 19:01:55.863025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:15.502 #22 NEW cov: 12190 ft: 14637 corp: 11/710b lim: 105 exec/s: 0 rss: 72Mb L: 84/96 MS: 1 ChangeBit-
00:07:15.502 [2024-07-15 19:01:55.912688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.502 [2024-07-15 19:01:55.912716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.761 #23 NEW cov: 12190 ft: 14938 corp: 12/771b lim: 105 exec/s: 0 rss: 72Mb L: 61/96 MS: 1 EraseBytes-
00:07:15.761 [2024-07-15 19:01:55.952997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1241513984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:55.953026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.761 [2024-07-15 19:01:55.953086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:55.953104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.761 [2024-07-15 19:01:55.953160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:55.953175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.761 [2024-07-15 19:01:55.953239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:55.953257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:15.761 #24 NEW cov: 12190 ft: 14950 corp: 13/867b lim: 105 exec/s: 0 rss: 72Mb L: 96/96 MS: 1 ShuffleBytes-
00:07:15.761 [2024-07-15 19:01:56.002947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.002979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.761 [2024-07-15 19:01:56.003036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:144115188075855872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.003052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.761 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:07:15.761 #25 NEW cov: 12213 ft: 14990 corp: 14/928b lim: 105 exec/s: 0 rss: 73Mb L: 61/96 MS: 1 ChangeBit-
00:07:15.761 [2024-07-15 19:01:56.063199] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:29696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.063234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.761 [2024-07-15 19:01:56.063272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:34359738368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.063288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.761 [2024-07-15 19:01:56.063346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.063361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.761 #26 NEW cov: 12213 ft: 15015 corp: 15/995b lim: 105 exec/s: 0 rss: 73Mb L: 67/96 MS: 1 ChangeBinInt-
00:07:15.761 [2024-07-15 19:01:56.103440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1241513984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.103468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.761 [2024-07-15 19:01:56.103533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.103549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.761 [2024-07-15 19:01:56.103606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.103621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.761 [2024-07-15 19:01:56.103677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.103694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:15.761 #27 NEW cov: 12213 ft: 15049 corp: 16/1098b lim: 105 exec/s: 27 rss: 73Mb L: 103/103 MS: 1 InsertRepeatedBytes-
00:07:15.761 [2024-07-15 19:01:56.153612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1241514496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.153641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:15.761 [2024-07-15 19:01:56.153700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.153714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:15.761 [2024-07-15 19:01:56.153771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.153790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:15.761 [2024-07-15 19:01:56.153850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:15.761 [2024-07-15 19:01:56.153866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:15.761 #28 NEW cov: 12213 ft: 15076 corp: 17/1194b lim: 105 exec/s: 28 rss: 73Mb L: 96/103 MS: 1 ChangeBit-
00:07:16.020 [2024-07-15 19:01:56.193609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.193637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.020 [2024-07-15 19:01:56.193674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8796093022208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.193692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.020 [2024-07-15 19:01:56.193749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.193765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.020 #29 NEW cov: 12213 ft: 15110 corp: 18/1260b lim: 105 exec/s: 29 rss: 73Mb L: 66/103 MS: 1 ChangeByte-
00:07:16.020 [2024-07-15 19:01:56.233583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.233612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.020 [2024-07-15 19:01:56.233674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:144115188075855872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.233691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.020 #30 NEW cov: 12213 ft: 15129 corp: 19/1321b lim: 105 exec/s: 30 rss: 73Mb L: 61/103 MS: 1 ChangeBit-
00:07:16.020 [2024-07-15 19:01:56.283890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:29696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.283920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.020 [2024-07-15 19:01:56.283974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.283992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.020 [2024-07-15 19:01:56.284052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.284069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.020 #31 NEW cov: 12213 ft: 15137 corp: 20/1404b lim: 105 exec/s: 31 rss: 73Mb L: 83/103 MS: 1 ChangeByte-
00:07:16.020 [2024-07-15 19:01:56.333982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.334011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.020 [2024-07-15 19:01:56.334048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.334067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.020 [2024-07-15 19:01:56.334127] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.334159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.020 #32 NEW cov: 12213 ft: 15143 corp: 21/1475b lim: 105 exec/s: 32 rss: 73Mb L: 71/103 MS: 1 EraseBytes-
00:07:16.020 [2024-07-15 19:01:56.374085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.374113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.020 [2024-07-15 19:01:56.374156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8796093022208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.374171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.020 [2024-07-15 19:01:56.374229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.374243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.020 #33 NEW cov: 12213 ft: 15155 corp: 22/1558b lim: 105 exec/s: 33 rss: 73Mb L: 83/103 MS: 1 CopyPart-
00:07:16.020 [2024-07-15 19:01:56.414181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.414208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.020 [2024-07-15 19:01:56.414277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8796093022208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.414293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.020 [2024-07-15 19:01:56.414351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2089670228644204174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.020 [2024-07-15 19:01:56.414367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.020 #34 NEW cov: 12213 ft: 15167 corp: 23/1632b lim: 105 exec/s: 34 rss: 73Mb L: 74/103 MS: 1 CMP- DE: "\000\000\177\\\014\016\216\035"-
00:07:16.279 [2024-07-15 19:01:56.454343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:671744000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.279 [2024-07-15 19:01:56.454374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.279 [2024-07-15 19:01:56.454416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.279 [2024-07-15 19:01:56.454434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.279 [2024-07-15 19:01:56.454491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.279 [2024-07-15 19:01:56.454509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.279 #35 NEW cov: 12213 ft: 15225 corp: 24/1701b lim: 105 exec/s: 35 rss: 73Mb L: 69/103 MS: 1 InsertRepeatedBytes-
00:07:16.279 [2024-07-15 19:01:56.504449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:140033785462784 len:36382 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.279 [2024-07-15 19:01:56.504480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.279 [2024-07-15 19:01:56.504519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.279 [2024-07-15 19:01:56.504535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.279 [2024-07-15 19:01:56.504593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.279 [2024-07-15 19:01:56.504609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.279 #36 NEW cov: 12213 ft: 15277 corp: 25/1770b lim: 105 exec/s: 36 rss: 73Mb L: 69/103 MS: 1 PersAutoDict- DE: "\000\000\177\\\014\016\216\035"-
00:07:16.279 [2024-07-15 19:01:56.554606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:28991922601197568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.279 [2024-07-15 19:01:56.554635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.279 [2024-07-15 19:01:56.554684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8796093022208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.279 [2024-07-15 19:01:56.554701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.279 [2024-07-15 19:01:56.554758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:272678883688448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.279 [2024-07-15 19:01:56.554773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.279 #37 NEW cov: 12213 ft: 15279 corp: 26/1836b lim: 105 exec/s: 37 rss: 73Mb L: 66/103 MS: 1 ChangeByte-
00:07:16.279 [2024-07-15 19:01:56.604829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:140033785462784 len:36382 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.279 [2024-07-15 19:01:56.604856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.279 [2024-07-15 19:01:56.604908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.279 [2024-07-15 19:01:56.604924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.280 [2024-07-15 19:01:56.604995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.280 [2024-07-15 19:01:56.605011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.280 [2024-07-15 19:01:56.605068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744069414649855 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.280 [2024-07-15 19:01:56.605083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:16.280 #38 NEW cov: 12213 ft: 15328 corp: 27/1940b lim: 105 exec/s: 38 rss: 73Mb L: 104/104 MS: 1 CopyPart-
00:07:16.280 [2024-07-15 19:01:56.654742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.280 [2024-07-15 19:01:56.654770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.280 [2024-07-15 19:01:56.654824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:144115188075855872 len:32605 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.280 [2024-07-15 19:01:56.654842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.280 #39 NEW cov: 12213 ft: 15337 corp: 28/2001b lim: 105 exec/s: 39 rss: 73Mb L: 61/104 MS: 1 PersAutoDict- DE: "\000\000\177\\\014\016\216\035"-
00:07:16.280 [2024-07-15 19:01:56.705059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.280 [2024-07-15 19:01:56.705087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.280 [2024-07-15 19:01:56.705124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8796093022208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.280 [2024-07-15 19:01:56.705139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.280 [2024-07-15 19:01:56.705197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.280 [2024-07-15 19:01:56.705213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.539 #40 NEW cov: 12213 ft: 15365 corp: 29/2067b lim: 105 exec/s: 40 rss: 74Mb L: 66/104 MS: 1 ChangeByte-
00:07:16.539 [2024-07-15 19:01:56.755139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.539 [2024-07-15 19:01:56.755166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.539 [2024-07-15 19:01:56.755235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.539 [2024-07-15 19:01:56.755252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.539 [2024-07-15 19:01:56.755310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.539 [2024-07-15 19:01:56.755325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.539 #41 NEW cov: 12213 ft: 15377 corp: 30/2138b lim: 105 exec/s: 41 rss: 74Mb L: 71/104 MS: 1 CopyPart-
00:07:16.539 [2024-07-15 19:01:56.795055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:671744000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.539 [2024-07-15 19:01:56.795082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.539 #42 NEW cov: 12213 ft: 15466 corp: 31/2169b lim: 105 exec/s: 42 rss: 74Mb L: 31/104 MS: 1 ChangeBinInt-
00:07:16.539 [2024-07-15 19:01:56.835079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:671744000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.539 [2024-07-15 19:01:56.835107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.539 #43 NEW cov: 12213 ft: 15531 corp: 32/2200b lim: 105 exec/s: 43 rss: 74Mb L: 31/104 MS: 1 ChangeByte-
00:07:16.539 [2024-07-15 19:01:56.875491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:29696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.539 [2024-07-15 19:01:56.875517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.539 [2024-07-15 19:01:56.875566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.539 [2024-07-15 19:01:56.875582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.539 [2024-07-15 19:01:56.875654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.539 [2024-07-15 19:01:56.875671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.539 #44 NEW cov: 12213 ft: 15548 corp: 33/2283b lim: 105 exec/s: 44 rss: 74Mb L: 83/104 MS: 1 ChangeBit-
00:07:16.539 [2024-07-15 19:01:56.925698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1243611648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.539 [2024-07-15 19:01:56.925725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.539 [2024-07-15 19:01:56.925797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.539 [2024-07-15 19:01:56.925813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.539 [2024-07-15 19:01:56.925869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.539 [2024-07-15 19:01:56.925885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.539 [2024-07-15 19:01:56.925940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.539 [2024-07-15 19:01:56.925955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:16.539 #45 NEW cov: 12213 ft: 15550 corp: 34/2379b lim: 105 exec/s: 45 rss: 74Mb L: 96/104 MS: 1 ChangeBit-
00:07:16.798 [2024-07-15 19:01:56.975832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:140033113718784 len:36382 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.798 [2024-07-15 19:01:56.975860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.798 [2024-07-15 19:01:56.975899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.798 [2024-07-15 19:01:56.975915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:16.798 [2024-07-15 19:01:56.975972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.798 [2024-07-15 19:01:56.975988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:16.798 #46 NEW cov: 12213 ft: 15616 corp: 35/2453b lim: 105 exec/s: 46 rss: 74Mb L: 74/104 MS: 1 PersAutoDict- DE: "\000\000\177\\\014\016\216\035"-
00:07:16.798 [2024-07-15 19:01:57.025931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.798 [2024-07-15 19:01:57.025958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:16.798 [2024-07-15 19:01:57.026011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:16.798 [2024-07-15 19:01:57.026027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.798 [2024-07-15 19:01:57.026084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.798 [2024-07-15 19:01:57.026098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.798 #47 NEW cov: 12213 ft: 15632 corp: 36/2524b lim: 105 exec/s: 47 rss: 74Mb L: 71/104 MS: 1 ChangeByte- 00:07:16.798 [2024-07-15 19:01:57.076233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.798 [2024-07-15 19:01:57.076260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.798 [2024-07-15 19:01:57.076336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.798 [2024-07-15 19:01:57.076352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.798 [2024-07-15 19:01:57.076407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.798 [2024-07-15 19:01:57.076422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.798 [2024-07-15 19:01:57.076479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.798 [2024-07-15 19:01:57.076493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.798 #48 NEW cov: 12213 ft: 15633 corp: 37/2609b lim: 105 exec/s: 48 rss: 74Mb L: 85/104 MS: 1 InsertByte- 00:07:16.798 [2024-07-15 19:01:57.116015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:671744000 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.798 [2024-07-15 19:01:57.116043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.798 [2024-07-15 19:01:57.116115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.798 [2024-07-15 19:01:57.116132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.798 #49 NEW cov: 12213 ft: 15650 corp: 38/2670b lim: 105 exec/s: 24 rss: 74Mb L: 61/104 MS: 1 CrossOver- 00:07:16.798 #49 DONE cov: 12213 ft: 15650 corp: 38/2670b lim: 105 exec/s: 24 rss: 74Mb 00:07:16.798 ###### Recommended dictionary. ###### 00:07:16.798 "\000\000\177\\\014\016\216\035" # Uses: 3 00:07:16.798 ###### End of recommended dictionary. 
######
00:07:16.798 Done 49 runs in 2 second(s)
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417'
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:17.058 19:01:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17
00:07:17.058 [2024-07-15 19:01:57.333251] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
00:07:17.058 [2024-07-15 19:01:57.333337] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675065 ]
00:07:17.058 EAL: No free 2048 kB hugepages reported on node 1
00:07:17.316 [2024-07-15 19:01:57.615287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:17.316 [2024-07-15 19:01:57.695261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.574 [2024-07-15 19:01:57.754613] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:17.574 [2024-07-15 19:01:57.770912] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 ***
00:07:17.574 INFO: Running with entropic power schedule (0xFF, 100).
00:07:17.574 INFO: Seed: 3937243987
00:07:17.574 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1),
00:07:17.574 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218),
00:07:17.574 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:07:17.574 INFO: A corpus is not provided, starting from an empty corpus
00:07:17.574 #2 INITED exec/s: 0 rss: 65Mb
00:07:17.574 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:17.574 This may also happen if the target rejected all inputs we tried so far
00:07:17.574 [2024-07-15 19:01:57.829916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:17.574 [2024-07-15 19:01:57.829950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:17.574 [2024-07-15 19:01:57.829988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:17.574 [2024-07-15 19:01:57.830005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:17.574 [2024-07-15 19:01:57.830057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:17.574 [2024-07-15 19:01:57.830071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:17.574 [2024-07-15 19:01:57.830123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:17.574 [2024-07-15 19:01:57.830138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:17.833 NEW_FUNC[1/697]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540
00:07:17.833 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:17.833 #11 NEW cov: 11990 ft: 11991 corp: 2/109b lim: 120 exec/s: 0 rss: 71Mb L: 108/108 MS: 4 ChangeBit-InsertByte-EraseBytes-InsertRepeatedBytes-
00:07:17.833 [2024-07-15 19:01:58.181049] nvme_qpair.c:
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.833 [2024-07-15 19:01:58.181121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.833 [2024-07-15 19:01:58.181203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.833 [2024-07-15 19:01:58.181239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.833 [2024-07-15 19:01:58.181320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.833 [2024-07-15 19:01:58.181348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.833 [2024-07-15 19:01:58.181428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.833 [2024-07-15 19:01:58.181456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.833 #17 NEW cov: 12120 ft: 12644 corp: 3/217b lim: 120 exec/s: 0 rss: 72Mb L: 108/108 MS: 1 ShuffleBytes- 00:07:17.833 [2024-07-15 19:01:58.240890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.833 [2024-07-15 19:01:58.240919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.833 [2024-07-15 19:01:58.240977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.833 [2024-07-15 19:01:58.240993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.833 [2024-07-15 19:01:58.241045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.833 [2024-07-15 19:01:58.241061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.833 [2024-07-15 19:01:58.241116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.833 [2024-07-15 19:01:58.241132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.094 #20 NEW cov: 12126 ft: 12911 corp: 4/329b lim: 120 exec/s: 0 rss: 72Mb L: 112/112 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:07:18.094 [2024-07-15 19:01:58.280998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.281025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:07:18.095 [2024-07-15 19:01:58.281069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.281085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.095 [2024-07-15 19:01:58.281139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.281154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.095 [2024-07-15 19:01:58.281207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.281231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.095 #31 NEW cov: 12211 ft: 13179 corp: 5/438b lim: 120 exec/s: 0 rss: 72Mb L: 109/112 MS: 1 CrossOver- 00:07:18.095 [2024-07-15 19:01:58.321152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.321180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.095 [2024-07-15 19:01:58.321244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.321261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.095 [2024-07-15 19:01:58.321325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.321339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.095 [2024-07-15 19:01:58.321393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.321407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.095 #37 NEW cov: 12211 ft: 13324 corp: 6/550b lim: 120 exec/s: 0 rss: 72Mb L: 112/112 MS: 1 ChangeBit- 00:07:18.095 [2024-07-15 19:01:58.370941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.370968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.095 [2024-07-15 19:01:58.371044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.371061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.095 #38 NEW cov: 12211 ft: 13776 corp: 7/610b lim: 120 exec/s: 0 rss: 72Mb L: 60/112 MS: 1 EraseBytes- 00:07:18.095 [2024-07-15 19:01:58.411044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.411072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.095 [2024-07-15 19:01:58.411143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.411157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.095 #39 NEW cov: 12211 ft: 13882 corp: 8/670b lim: 120 exec/s: 0 rss: 73Mb L: 60/112 MS: 1 CrossOver- 00:07:18.095 [2024-07-15 19:01:58.461057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.461083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.095 #40 NEW cov: 12211 ft: 14721 corp: 9/705b lim: 120 exec/s: 0 rss: 73Mb L: 35/112 MS: 1 EraseBytes- 00:07:18.095 [2024-07-15 19:01:58.511667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.511696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.095 [2024-07-15 19:01:58.511739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.511755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.095 [2024-07-15 19:01:58.511810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.511824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.095 [2024-07-15 19:01:58.511879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-07-15 19:01:58.511895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.384 #41 NEW cov: 12211 ft: 14762 corp: 10/814b lim: 120 exec/s: 0 rss: 73Mb L: 109/112 MS: 1 ChangeBinInt- 00:07:18.384 [2024-07-15 19:01:58.561769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.561799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.384 [2024-07-15 19:01:58.561841] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.561855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.384 [2024-07-15 19:01:58.561910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.561926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.384 [2024-07-15 19:01:58.561979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.561995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.384 #42 NEW cov: 12211 ft: 14812 corp: 11/919b lim: 120 exec/s: 0 rss: 73Mb L: 105/112 MS: 1 EraseBytes- 00:07:18.384 [2024-07-15 19:01:58.601443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.601473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.384 #43 NEW cov: 12211 ft: 14906 corp: 12/951b lim: 120 exec/s: 0 rss: 73Mb L: 32/112 MS: 1 EraseBytes- 00:07:18.384 [2024-07-15 19:01:58.651711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.651739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.384 [2024-07-15 19:01:58.651810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.651826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.384 #44 NEW cov: 12211 ft: 14940 corp: 13/1011b lim: 120 exec/s: 0 rss: 73Mb L: 60/112 MS: 1 CrossOver- 00:07:18.384 [2024-07-15 19:01:58.702165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.702196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.384 [2024-07-15 19:01:58.702256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.702274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.384 [2024-07-15 19:01:58.702330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.702347] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.384 [2024-07-15 19:01:58.702404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.702420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.384 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:18.384 #45 NEW cov: 12234 ft: 14987 corp: 14/1124b lim: 120 exec/s: 0 rss: 73Mb L: 113/113 MS: 1 CopyPart- 00:07:18.384 [2024-07-15 19:01:58.742292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.742321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.384 [2024-07-15 19:01:58.742367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.742384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.384 [2024-07-15 19:01:58.742439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.742453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.384 [2024-07-15 19:01:58.742509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.742523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.384 #46 NEW cov: 12234 ft: 15074 corp: 15/1237b lim: 120 exec/s: 0 rss: 73Mb L: 113/113 MS: 1 InsertByte- 00:07:18.384 [2024-07-15 19:01:58.782080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.782108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.384 [2024-07-15 19:01:58.782163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636447603285341 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.384 [2024-07-15 19:01:58.782179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.384 #47 NEW cov: 12234 ft: 15118 corp: 16/1285b lim: 120 exec/s: 0 rss: 73Mb L: 48/113 MS: 1 CrossOver- 00:07:18.685 [2024-07-15 19:01:58.822029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23838 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:58.822060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.685 
#48 NEW cov: 12234 ft: 15142 corp: 17/1317b lim: 120 exec/s: 48 rss: 73Mb L: 32/113 MS: 1 ChangeBit- 00:07:18.685 [2024-07-15 19:01:58.872481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073098992989 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:58.872510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:58.872546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:58.872562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:58.872616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:58.872633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.685 #49 NEW cov: 12234 ft: 15441 corp: 18/1397b lim: 120 exec/s: 49 rss: 73Mb L: 80/113 MS: 1 CrossOver- 00:07:18.685 [2024-07-15 19:01:58.912725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:58.912753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:58.912798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:58.912814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:58.912868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:58.912884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:58.912936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:58.912951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.685 #50 NEW cov: 12234 ft: 15529 corp: 19/1513b lim: 120 exec/s: 50 rss: 73Mb L: 116/116 MS: 1 CMP- DE: "\000\0238i?~W\310"- 00:07:18.685 [2024-07-15 19:01:58.952835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:58.952861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:58.952908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 
[2024-07-15 19:01:58.952924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:58.952977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941116765 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:58.952991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:58.953045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:58.953061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.685 #51 NEW cov: 12234 ft: 15574 corp: 20/1625b lim: 120 exec/s: 51 rss: 73Mb L: 112/116 MS: 1 ChangeByte- 00:07:18.685 [2024-07-15 19:01:58.992498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:58.992526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.685 #52 NEW cov: 12234 ft: 15598 corp: 21/1660b lim: 120 exec/s: 52 rss: 73Mb L: 35/116 MS: 1 ChangeByte- 00:07:18.685 [2024-07-15 19:01:59.033062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:59.033088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:59.033135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:59.033151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:59.033203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:59.033222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:59.033276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:59.033290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.685 #53 NEW cov: 12234 ft: 15630 corp: 22/1765b lim: 120 exec/s: 53 rss: 73Mb L: 105/116 MS: 1 ChangeBit- 00:07:18.685 [2024-07-15 19:01:59.083208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:59.083239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:59.083293] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:59.083310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:59.083362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:59.083378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.685 [2024-07-15 19:01:59.083431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636072565398877 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.685 [2024-07-15 19:01:59.083447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.952 #54 NEW cov: 12234 ft: 15638 corp: 23/1878b lim: 120 exec/s: 54 rss: 73Mb L: 113/116 MS: 1 ChangeByte- 00:07:18.952 [2024-07-15 19:01:59.133347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.133385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.952 [2024-07-15 19:01:59.133447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.133467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.952 [2024-07-15 19:01:59.133523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.133537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.952 [2024-07-15 19:01:59.133592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.133608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.952 #55 NEW cov: 12234 ft: 15690 corp: 24/1987b lim: 120 exec/s: 55 rss: 73Mb L: 109/116 MS: 1 CopyPart- 00:07:18.952 [2024-07-15 19:01:59.183330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.183358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.952 [2024-07-15 19:01:59.183411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.183427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.952 
[2024-07-15 19:01:59.183483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.183499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.952 #56 NEW cov: 12234 ft: 15709 corp: 25/2080b lim: 120 exec/s: 56 rss: 73Mb L: 93/116 MS: 1 CrossOver- 00:07:18.952 [2024-07-15 19:01:59.223282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.223309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.952 [2024-07-15 19:01:59.223357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.223373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.952 #57 NEW cov: 12234 ft: 15755 corp: 26/2140b lim: 120 exec/s: 57 rss: 73Mb L: 60/116 MS: 1 CrossOver- 00:07:18.952 [2024-07-15 19:01:59.263706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.263732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.952 [2024-07-15 19:01:59.263781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.263797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.952 [2024-07-15 19:01:59.263866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.263882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.952 [2024-07-15 19:01:59.263936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.263956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.952 #58 NEW cov: 12234 ft: 15767 corp: 27/2252b lim: 120 exec/s: 58 rss: 73Mb L: 112/116 MS: 1 PersAutoDict- DE: "\000\0238i?~W\310"- 00:07:18.952 [2024-07-15 19:01:59.303797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.303823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.952 [2024-07-15 19:01:59.303885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:18.952 [2024-07-15 19:01:59.303902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.952 [2024-07-15 19:01:59.303957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.303973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.952 [2024-07-15 19:01:59.304028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-15 19:01:59.304044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.952 #59 NEW cov: 12234 ft: 15779 corp: 28/2361b lim: 120 exec/s: 59 rss: 74Mb L: 109/116 MS: 1 ShuffleBytes- 00:07:18.952 [2024-07-15 19:01:59.353953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.953 [2024-07-15 19:01:59.353981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.953 [2024-07-15 19:01:59.354044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.953 [2024-07-15 19:01:59.354061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.953 [2024-07-15 19:01:59.354117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.953 [2024-07-15 19:01:59.354133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.953 [2024-07-15 19:01:59.354187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.953 [2024-07-15 19:01:59.354203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.211 #60 NEW cov: 12234 ft: 15784 corp: 29/2470b lim: 120 exec/s: 60 rss: 74Mb L: 109/116 MS: 1 ShuffleBytes- 00:07:19.211 [2024-07-15 19:01:59.404101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6738332122217274717 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.211 [2024-07-15 19:01:59.404130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.211 [2024-07-15 19:01:59.404196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.211 [2024-07-15 19:01:59.404213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.211 [2024-07-15 19:01:59.404277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 
len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.211 [2024-07-15 19:01:59.404292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.211 [2024-07-15 19:01:59.404347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.404363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.212 #61 NEW cov: 12234 ft: 15817 corp: 30/2583b lim: 120 exec/s: 61 rss: 74Mb L: 113/116 MS: 1 InsertByte- 00:07:19.212 [2024-07-15 19:01:59.444139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.444167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.444223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.444240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.444296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727635799063223645 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.444311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.212 #62 NEW cov: 12234 ft: 15827 corp: 31/2659b lim: 120 exec/s: 62 rss: 74Mb L: 76/116 MS: 1 CrossOver- 00:07:19.212 [2024-07-15 19:01:59.494362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.494390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.494437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.494453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.494506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.494520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.494576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.494590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.212 #63 NEW cov: 12234 ft: 15838 corp: 32/2767b lim: 120 exec/s: 63 rss: 74Mb L: 108/116 MS: 1 CMP- 
DE: "\377\377\377\377"- 00:07:19.212 [2024-07-15 19:01:59.534472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.534501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.534561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:41891 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.534576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.534633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.534648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.534704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.534719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.212 #64 NEW cov: 12234 ft: 15850 corp: 33/2880b lim: 120 exec/s: 64 rss: 74Mb L: 113/116 MS: 1 ChangeBinInt- 00:07:19.212 [2024-07-15 19:01:59.584627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.584654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.584702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.584718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.584771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.584787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.584840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.584856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.212 #65 NEW cov: 12234 ft: 15858 corp: 34/2989b lim: 120 exec/s: 65 rss: 74Mb L: 109/116 MS: 1 ChangeBit- 00:07:19.212 [2024-07-15 19:01:59.634765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.634792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.634853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.634869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.634922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.634937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.212 [2024-07-15 19:01:59.634992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636072565398877 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.212 [2024-07-15 19:01:59.635008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.470 #66 NEW cov: 12234 ft: 15865 corp: 35/3102b lim: 120 exec/s: 66 rss: 74Mb L: 113/116 MS: 1 ChangeBinInt- 00:07:19.470 [2024-07-15 19:01:59.684472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.470 [2024-07-15 19:01:59.684501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.470 #67 NEW cov: 12234 ft: 15882 corp: 36/3145b lim: 120 exec/s: 67 rss: 74Mb L: 43/116 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\000"- 00:07:19.470 [2024-07-15 19:01:59.724998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.470 [2024-07-15 19:01:59.725026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.470 [2024-07-15 19:01:59.725071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.470 [2024-07-15 19:01:59.725086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.470 [2024-07-15 19:01:59.725139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.470 [2024-07-15 19:01:59.725155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.470 [2024-07-15 19:01:59.725212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.470 [2024-07-15 19:01:59.725233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.471 #68 NEW cov: 12234 ft: 15893 corp: 37/3258b lim: 120 exec/s: 68 rss: 74Mb L: 113/116 MS: 1 CrossOver- 00:07:19.471 [2024-07-15 19:01:59.765151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:0 lba:18446744072451260415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.471 [2024-07-15 19:01:59.765179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.471 [2024-07-15 19:01:59.765230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.471 [2024-07-15 19:01:59.765263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.471 [2024-07-15 19:01:59.765318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.471 [2024-07-15 19:01:59.765334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.471 [2024-07-15 19:01:59.765388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.471 [2024-07-15 19:01:59.765403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.471 #69 NEW cov: 12234 ft: 15919 corp: 38/3367b lim: 120 exec/s: 69 rss: 74Mb L: 109/116 MS: 1 CopyPart- 00:07:19.471 [2024-07-15 19:01:59.805096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073102269789 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.471 [2024-07-15 19:01:59.805122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.471 [2024-07-15 19:01:59.805159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.471 [2024-07-15 19:01:59.805174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.471 [2024-07-15 19:01:59.805228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.471 [2024-07-15 19:01:59.805264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.471 #70 NEW cov: 12234 ft: 15922 corp: 39/3457b lim: 120 exec/s: 35 rss: 74Mb L: 90/116 MS: 1 EraseBytes- 00:07:19.471 #70 DONE cov: 12234 ft: 15922 corp: 39/3457b lim: 120 exec/s: 35 rss: 74Mb 00:07:19.471 ###### Recommended dictionary. ###### 00:07:19.471 "\000\0238i?~W\310" # Uses: 1 00:07:19.471 "\377\377\377\377" # Uses: 0 00:07:19.471 "\377\377\377\377\377\377\377\000" # Uses: 0 00:07:19.471 ###### End of recommended dictionary. 
######
00:07:19.471 Done 70 runs in 2 second(s)
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418'
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:19.730 19:01:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18
00:07:19.730 [2024-07-15 19:02:00.021805] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
00:07:19.730 [2024-07-15 19:02:00.021875] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675406 ]
00:07:19.730 EAL: No free 2048 kB hugepages reported on node 1
00:07:19.988 [2024-07-15 19:02:00.225348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.988 [2024-07-15 19:02:00.298215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.988 [2024-07-15 19:02:00.358342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:19.988 [2024-07-15 19:02:00.374649] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 ***
00:07:19.988 INFO: Running with entropic power schedule (0xFF, 100).
00:07:19.988 INFO: Seed: 2248289305
00:07:20.247 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1),
00:07:20.247 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218),
00:07:20.247 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:07:20.247 INFO: A corpus is not provided, starting from an empty corpus
00:07:20.247 #2 INITED exec/s: 0 rss: 64Mb
00:07:20.247 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:20.247 This may also happen if the target rejected all inputs we tried so far
00:07:20.247 [2024-07-15 19:02:00.439853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:20.247 [2024-07-15 19:02:00.439884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.247 [2024-07-15 19:02:00.439955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:20.247 [2024-07-15 19:02:00.439971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:20.506 NEW_FUNC[1/695]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562
00:07:20.506 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:20.506 #4 NEW cov: 11933 ft: 11934 corp: 2/52b lim: 100 exec/s: 0 rss: 72Mb L: 51/51 MS: 2 ChangeByte-InsertRepeatedBytes-
00:07:20.506 [2024-07-15 19:02:00.780704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:20.506 [2024-07-15 19:02:00.780746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.506 [2024-07-15 19:02:00.780817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:20.506 [2024-07-15 19:02:00.780833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:20.506 #5 NEW cov: 12063 ft: 12461 corp: 3/103b lim: 100 exec/s: 0 rss: 72Mb L: 51/51 MS: 1 ChangeBit-
00:07:20.506 [2024-07-15 19:02:00.830712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:20.506 [2024-07-15 19:02:00.830741] nvme_qpair.c:
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.506 [2024-07-15 19:02:00.830792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:20.506 [2024-07-15 19:02:00.830807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.506 #6 NEW cov: 12069 ft: 12689 corp: 4/154b lim: 100 exec/s: 0 rss: 72Mb L: 51/51 MS: 1 ChangeBinInt- 00:07:20.506 [2024-07-15 19:02:00.880847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:20.506 [2024-07-15 19:02:00.880874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.506 [2024-07-15 19:02:00.880942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:20.506 [2024-07-15 19:02:00.880957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.506 #7 NEW cov: 12154 ft: 12954 corp: 5/205b lim: 100 exec/s: 0 rss: 72Mb L: 51/51 MS: 1 CopyPart- 00:07:20.506 [2024-07-15 19:02:00.931206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:20.506 [2024-07-15 19:02:00.931241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.506 [2024-07-15 19:02:00.931281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:20.506 [2024-07-15 19:02:00.931297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.506 [2024-07-15 19:02:00.931351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:20.506 [2024-07-15 19:02:00.931366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.506 [2024-07-15 19:02:00.931421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:20.506 [2024-07-15 19:02:00.931435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.764 #13 NEW cov: 12154 ft: 13357 corp: 6/303b lim: 100 exec/s: 0 rss: 72Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:07:20.764 [2024-07-15 19:02:00.971068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:20.764 [2024-07-15 19:02:00.971094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.764 [2024-07-15 19:02:00.971151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:20.764 [2024-07-15 19:02:00.971166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.764 #14 NEW cov: 12154 ft: 13538 corp: 7/355b lim: 100 exec/s: 0 rss: 72Mb L: 52/98 MS: 1 InsertByte- 00:07:20.764 [2024-07-15 19:02:01.011175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:20.764 
[2024-07-15 19:02:01.011201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.764 [2024-07-15 19:02:01.011258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:20.764 [2024-07-15 19:02:01.011273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.764 #15 NEW cov: 12154 ft: 13645 corp: 8/407b lim: 100 exec/s: 0 rss: 72Mb L: 52/98 MS: 1 ShuffleBytes- 00:07:20.764 [2024-07-15 19:02:01.061346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:20.764 [2024-07-15 19:02:01.061371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.764 [2024-07-15 19:02:01.061414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:20.764 [2024-07-15 19:02:01.061428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.764 #16 NEW cov: 12154 ft: 13683 corp: 9/459b lim: 100 exec/s: 0 rss: 73Mb L: 52/98 MS: 1 ChangeBit- 00:07:20.764 [2024-07-15 19:02:01.111460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:20.764 [2024-07-15 19:02:01.111486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.764 [2024-07-15 19:02:01.111522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:20.764 [2024-07-15 19:02:01.111536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.764 #17 NEW cov: 12154 ft: 13734 corp: 10/510b lim: 100 exec/s: 0 rss: 73Mb L: 51/98 MS: 1 ShuffleBytes- 00:07:20.764 [2024-07-15 19:02:01.161624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:20.764 [2024-07-15 19:02:01.161649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.764 [2024-07-15 19:02:01.161700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:20.764 [2024-07-15 19:02:01.161715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.764 #23 NEW cov: 12154 ft: 13758 corp: 11/561b lim: 100 exec/s: 0 rss: 73Mb L: 51/98 MS: 1 ChangeBit- 00:07:21.023 [2024-07-15 19:02:01.201625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.023 [2024-07-15 19:02:01.201650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.023 #29 NEW cov: 12154 ft: 14092 corp: 12/596b lim: 100 exec/s: 0 rss: 73Mb L: 35/98 MS: 1 EraseBytes- 00:07:21.023 [2024-07-15 19:02:01.241831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.023 [2024-07-15 19:02:01.241856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:21.023 [2024-07-15 19:02:01.241917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.023 [2024-07-15 19:02:01.241932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.023 #30 NEW cov: 12154 ft: 14140 corp: 13/648b lim: 100 exec/s: 0 rss: 73Mb L: 52/98 MS: 1 InsertByte- 00:07:21.023 [2024-07-15 19:02:01.291961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.023 [2024-07-15 19:02:01.291987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.023 [2024-07-15 19:02:01.292022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.023 [2024-07-15 19:02:01.292036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.023 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:21.023 #31 NEW cov: 12177 ft: 14189 corp: 14/700b lim: 100 exec/s: 0 rss: 73Mb L: 52/98 MS: 1 CrossOver- 00:07:21.023 [2024-07-15 19:02:01.342091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.023 [2024-07-15 19:02:01.342117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.023 [2024-07-15 19:02:01.342162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.023 [2024-07-15 19:02:01.342177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.023 #32 NEW cov: 12177 ft: 14204 corp: 15/752b lim: 100 exec/s: 0 rss: 73Mb L: 52/98 MS: 1 InsertByte- 00:07:21.023 [2024-07-15 19:02:01.382197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.023 [2024-07-15 19:02:01.382228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.023 [2024-07-15 19:02:01.382285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.023 [2024-07-15 19:02:01.382300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.023 #33 NEW cov: 12177 ft: 14226 corp: 16/803b lim: 100 exec/s: 0 rss: 73Mb L: 51/98 MS: 1 ChangeBit- 00:07:21.023 [2024-07-15 19:02:01.422251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.023 [2024-07-15 19:02:01.422278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.023 #34 NEW cov: 12177 ft: 14240 corp: 17/838b lim: 100 exec/s: 34 rss: 73Mb L: 35/98 MS: 1 EraseBytes- 00:07:21.282 [2024-07-15 19:02:01.462646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.282 [2024-07-15 19:02:01.462672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.282 [2024-07-15 19:02:01.462733] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.282 [2024-07-15 19:02:01.462748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.282 [2024-07-15 19:02:01.462798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:21.282 [2024-07-15 19:02:01.462812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.282 [2024-07-15 19:02:01.462867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:21.282 [2024-07-15 19:02:01.462881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.282 #35 NEW cov: 12177 ft: 14247 corp: 18/937b lim: 100 exec/s: 35 rss: 73Mb L: 99/99 MS: 1 InsertByte- 00:07:21.282 [2024-07-15 19:02:01.512554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.282 [2024-07-15 19:02:01.512581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.282 [2024-07-15 19:02:01.512615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.282 [2024-07-15 19:02:01.512628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.282 #36 NEW cov: 12177 ft: 14264 corp: 19/988b lim: 100 exec/s: 36 rss: 73Mb L: 51/99 MS: 1 CrossOver- 00:07:21.282 [2024-07-15 19:02:01.552586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.282 [2024-07-15 19:02:01.552613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.282 #37 NEW cov: 12177 ft: 14288 corp: 20/1024b lim: 100 exec/s: 37 rss: 73Mb L: 36/99 MS: 1 InsertByte- 00:07:21.282 [2024-07-15 19:02:01.602749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.282 [2024-07-15 19:02:01.602777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.282 #38 NEW cov: 12177 ft: 14341 corp: 21/1060b lim: 100 exec/s: 38 rss: 73Mb L: 36/99 MS: 1 ChangeByte- 00:07:21.282 [2024-07-15 19:02:01.653073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.282 [2024-07-15 19:02:01.653100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.282 [2024-07-15 19:02:01.653140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.282 [2024-07-15 19:02:01.653156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.282 [2024-07-15 19:02:01.653205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:21.282 [2024-07-15 19:02:01.653223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:21.282 #39 NEW cov: 12177 ft: 14573 corp: 22/1120b lim: 100 exec/s: 39 rss: 73Mb L: 60/99 MS: 1 InsertRepeatedBytes- 00:07:21.282 [2024-07-15 19:02:01.693104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.282 [2024-07-15 19:02:01.693130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.282 [2024-07-15 19:02:01.693181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.282 [2024-07-15 19:02:01.693196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.540 #40 NEW cov: 12177 ft: 14589 corp: 23/1173b lim: 100 exec/s: 40 rss: 73Mb L: 53/99 MS: 1 InsertByte- 00:07:21.540 [2024-07-15 19:02:01.733201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.540 [2024-07-15 19:02:01.733233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.540 [2024-07-15 19:02:01.733296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.540 [2024-07-15 19:02:01.733311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.540 #41 NEW cov: 12177 ft: 14624 corp: 24/1224b lim: 100 exec/s: 41 rss: 73Mb L: 51/99 MS: 1 ChangeByte- 00:07:21.540 [2024-07-15 19:02:01.773300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.540 [2024-07-15 19:02:01.773326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.540 [2024-07-15 19:02:01.773392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.540 [2024-07-15 19:02:01.773408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.540 #42 NEW cov: 12177 ft: 14632 corp: 25/1276b lim: 100 exec/s: 42 rss: 73Mb L: 52/99 MS: 1 ChangeBit- 00:07:21.540 [2024-07-15 19:02:01.823455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.540 [2024-07-15 19:02:01.823480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.540 [2024-07-15 19:02:01.823542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.540 [2024-07-15 19:02:01.823557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.540 #43 NEW cov: 12177 ft: 14644 corp: 26/1318b lim: 100 exec/s: 43 rss: 73Mb L: 42/99 MS: 1 EraseBytes- 00:07:21.540 [2024-07-15 19:02:01.863555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.540 [2024-07-15 19:02:01.863581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.540 [2024-07-15 19:02:01.863634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.540 
[2024-07-15 19:02:01.863648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.540 #44 NEW cov: 12177 ft: 14658 corp: 27/1370b lim: 100 exec/s: 44 rss: 73Mb L: 52/99 MS: 1 ChangeBinInt- 00:07:21.540 [2024-07-15 19:02:01.903665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.540 [2024-07-15 19:02:01.903691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.540 [2024-07-15 19:02:01.903757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.540 [2024-07-15 19:02:01.903771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.540 #45 NEW cov: 12177 ft: 14732 corp: 28/1421b lim: 100 exec/s: 45 rss: 73Mb L: 51/99 MS: 1 CopyPart- 00:07:21.540 [2024-07-15 19:02:01.953828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.540 [2024-07-15 19:02:01.953853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.540 [2024-07-15 19:02:01.953890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.540 [2024-07-15 19:02:01.953904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.798 #46 NEW cov: 12177 ft: 14764 corp: 29/1472b lim: 100 exec/s: 46 rss: 73Mb L: 51/99 MS: 1 ChangeBinInt- 00:07:21.798 [2024-07-15 19:02:01.994185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.798 [2024-07-15 19:02:01.994211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.798 [2024-07-15 19:02:01.994287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.798 [2024-07-15 19:02:01.994301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.798 [2024-07-15 19:02:01.994357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:21.798 [2024-07-15 19:02:01.994372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.798 [2024-07-15 19:02:01.994423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:21.798 [2024-07-15 19:02:01.994436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.798 #47 NEW cov: 12177 ft: 14775 corp: 30/1570b lim: 100 exec/s: 47 rss: 73Mb L: 98/99 MS: 1 ChangeBinInt- 00:07:21.798 [2024-07-15 19:02:02.034120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.798 [2024-07-15 19:02:02.034145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.798 [2024-07-15 19:02:02.034181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.798 [2024-07-15 19:02:02.034195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.798 #48 NEW cov: 12177 ft: 14779 corp: 31/1619b lim: 100 exec/s: 48 rss: 73Mb L: 49/99 MS: 1 EraseBytes- 00:07:21.798 [2024-07-15 19:02:02.074179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.798 [2024-07-15 19:02:02.074206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.798 [2024-07-15 19:02:02.074255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.798 [2024-07-15 19:02:02.074286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.798 #49 NEW cov: 12177 ft: 14803 corp: 32/1670b lim: 100 exec/s: 49 rss: 73Mb L: 51/99 MS: 1 ShuffleBytes- 00:07:21.798 [2024-07-15 19:02:02.124562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.798 [2024-07-15 19:02:02.124587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.798 [2024-07-15 19:02:02.124649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.798 [2024-07-15 19:02:02.124664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.798 [2024-07-15 19:02:02.124714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:21.798 [2024-07-15 19:02:02.124729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.798 [2024-07-15 19:02:02.124783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:21.798 [2024-07-15 19:02:02.124797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.798 #50 NEW cov: 12177 ft: 14810 corp: 33/1754b lim: 100 exec/s: 50 rss: 74Mb L: 84/99 MS: 1 CrossOver- 00:07:21.799 [2024-07-15 19:02:02.174465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.799 [2024-07-15 19:02:02.174491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.799 [2024-07-15 19:02:02.174541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.799 [2024-07-15 19:02:02.174555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.799 #51 NEW cov: 12177 ft: 14868 corp: 34/1798b lim: 100 exec/s: 51 rss: 74Mb L: 44/99 MS: 1 EraseBytes- 00:07:21.799 [2024-07-15 19:02:02.214686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:21.799 [2024-07-15 19:02:02.214714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.799 [2024-07-15 19:02:02.214751] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:21.799 [2024-07-15 19:02:02.214766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.799 [2024-07-15 19:02:02.214819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:21.799 [2024-07-15 19:02:02.214834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.056 #52 NEW cov: 12177 ft: 14906 corp: 35/1867b lim: 100 exec/s: 52 rss: 74Mb L: 69/99 MS: 1 InsertRepeatedBytes- 00:07:22.056 [2024-07-15 19:02:02.254777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:22.056 [2024-07-15 19:02:02.254802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.056 [2024-07-15 19:02:02.254843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:22.056 [2024-07-15 19:02:02.254857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.056 [2024-07-15 19:02:02.254910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:22.056 [2024-07-15 19:02:02.254925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.056 #53 NEW cov: 12177 ft: 14912 corp: 36/1945b lim: 100 exec/s: 53 rss: 74Mb L: 78/99 MS: 1 InsertRepeatedBytes- 00:07:22.056 [2024-07-15 19:02:02.305038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:22.056 [2024-07-15 19:02:02.305065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.056 [2024-07-15 19:02:02.305109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:22.057 [2024-07-15 19:02:02.305125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.057 [2024-07-15 19:02:02.305176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:22.057 [2024-07-15 19:02:02.305191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.057 [2024-07-15 19:02:02.305247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:22.057 [2024-07-15 19:02:02.305261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.057 #54 NEW cov: 12177 ft: 14916 corp: 37/2044b lim: 100 exec/s: 54 rss: 74Mb L: 99/99 MS: 1 ShuffleBytes- 00:07:22.057 [2024-07-15 19:02:02.354915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:22.057 [2024-07-15 19:02:02.354941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.057 [2024-07-15 19:02:02.354980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:1 nsid:0
00:07:22.057 [2024-07-15 19:02:02.354994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:22.057 #55 NEW cov: 12177 ft: 14928 corp: 38/2090b lim: 100 exec/s: 55 rss: 74Mb L: 46/99 MS: 1 EraseBytes-
00:07:22.057 [2024-07-15 19:02:02.395036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:22.057 [2024-07-15 19:02:02.395062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:22.057 [2024-07-15 19:02:02.395108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:22.057 [2024-07-15 19:02:02.395122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:22.057 #56 NEW cov: 12177 ft: 14970 corp: 39/2146b lim: 100 exec/s: 28 rss: 74Mb L: 56/99 MS: 1 CopyPart-
00:07:22.057 #56 DONE cov: 12177 ft: 14970 corp: 39/2146b lim: 100 exec/s: 28 rss: 74Mb
00:07:22.057 Done 56 runs in 2 second(s)
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419'
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:22.315 19:02:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19
[2024-07-15 19:02:02.599100] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization...
00:07:22.315 [2024-07-15 19:02:02.599185] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675742 ]
00:07:22.315 EAL: No free 2048 kB hugepages reported on node 1
00:07:22.574 [2024-07-15 19:02:02.810557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.574 [2024-07-15 19:02:02.882084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.574 [2024-07-15 19:02:02.941776] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:22.574 [2024-07-15 19:02:02.958091] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:07:22.574 INFO: Running with entropic power schedule (0xFF, 100).
00:07:22.574 INFO: Seed: 536307810
00:07:22.574 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1),
00:07:22.574 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218),
00:07:22.574 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:07:22.574 INFO: A corpus is not provided, starting from an empty corpus
00:07:22.574 #2 INITED exec/s: 0 rss: 65Mb
00:07:22.574 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:22.574 This may also happen if the target rejected all inputs we tried so far
00:07:22.832 [2024-07-15 19:02:03.035144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772
00:07:22.832 [2024-07-15 19:02:03.035189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:22.832 [2024-07-15 19:02:03.035306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772
00:07:22.832 [2024-07-15 19:02:03.035334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:22.832 [2024-07-15 19:02:03.035438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772
00:07:22.832 [2024-07-15 19:02:03.035459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:23.132 NEW_FUNC[1/695]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582
00:07:23.132 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:23.132 #14 NEW cov: 11911 ft: 11912 corp: 2/32b lim: 50 exec/s: 0 rss: 72Mb L: 31/31 MS: 2 ShuffleBytes-InsertRepeatedBytes-
00:07:23.132 [2024-07-15 19:02:03.375855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772
00:07:23.132 [2024-07-15
19:02:03.375907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.132 [2024-07-15 19:02:03.376014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 00:07:23.132 [2024-07-15 19:02:03.376041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.132 [2024-07-15 19:02:03.376135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 00:07:23.132 [2024-07-15 19:02:03.376161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.132 #15 NEW cov: 12041 ft: 12491 corp: 3/63b lim: 50 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 ShuffleBytes- 00:07:23.132 [2024-07-15 19:02:03.446025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.132 [2024-07-15 19:02:03.446056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.132 [2024-07-15 19:02:03.446113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 00:07:23.132 [2024-07-15 19:02:03.446134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.132 [2024-07-15 19:02:03.446197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:217020518514268163 len:772 00:07:23.133 [2024-07-15 19:02:03.446215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.133 #16 NEW cov: 12047 ft: 12711 corp: 4/94b lim: 50 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 ChangeByte- 00:07:23.133 [2024-07-15 19:02:03.495858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16638239752757634790 len:59111 00:07:23.133 [2024-07-15 19:02:03.495887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.133 #17 NEW cov: 12132 ft: 13291 corp: 5/112b lim: 50 exec/s: 0 rss: 72Mb L: 18/31 MS: 1 InsertRepeatedBytes- 00:07:23.133 [2024-07-15 19:02:03.546169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.133 [2024-07-15 19:02:03.546201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.391 #18 NEW cov: 12132 ft: 13402 corp: 6/131b lim: 50 exec/s: 0 rss: 72Mb L: 19/31 MS: 1 EraseBytes- 00:07:23.391 [2024-07-15 19:02:03.606976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.391 [2024-07-15 19:02:03.607008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.391 [2024-07-15 19:02:03.607098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 
00:07:23.391 [2024-07-15 19:02:03.607116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.391 [2024-07-15 19:02:03.607188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:217020518514294558 len:772 00:07:23.391 [2024-07-15 19:02:03.607207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.391 #19 NEW cov: 12132 ft: 13453 corp: 7/164b lim: 50 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CMP- DE: "\377\036"- 00:07:23.391 [2024-07-15 19:02:03.657434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.391 [2024-07-15 19:02:03.657467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.391 [2024-07-15 19:02:03.657528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217020521014035203 len:772 00:07:23.391 [2024-07-15 19:02:03.657545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.391 [2024-07-15 19:02:03.657625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 00:07:23.391 [2024-07-15 19:02:03.657641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.391 #20 NEW cov: 12132 ft: 13521 corp: 8/201b lim: 50 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 CrossOver- 00:07:23.391 [2024-07-15 19:02:03.717997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.391 [2024-07-15 19:02:03.718027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.391 [2024-07-15 19:02:03.718101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217166223484584707 len:34696 00:07:23.391 [2024-07-15 19:02:03.718123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.391 [2024-07-15 19:02:03.718195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9728622933743994759 len:772 00:07:23.391 [2024-07-15 19:02:03.718214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.391 [2024-07-15 19:02:03.718309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:23.391 [2024-07-15 19:02:03.718329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.391 #21 NEW cov: 12132 ft: 13795 corp: 9/247b lim: 50 exec/s: 0 rss: 72Mb L: 46/46 MS: 1 InsertRepeatedBytes- 00:07:23.391 [2024-07-15 19:02:03.778431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.391 [2024-07-15 19:02:03.778465] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.391 [2024-07-15 19:02:03.778529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217166223484584707 len:34696 00:07:23.391 [2024-07-15 19:02:03.778546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.391 [2024-07-15 19:02:03.778626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9728622933743994759 len:772 00:07:23.391 [2024-07-15 19:02:03.778643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.391 [2024-07-15 19:02:03.778730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:23.391 [2024-07-15 19:02:03.778749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.391 #22 NEW cov: 12132 ft: 13839 corp: 10/293b lim: 50 exec/s: 0 rss: 72Mb L: 46/46 MS: 1 ChangeBinInt- 00:07:23.651 [2024-07-15 19:02:03.838385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:23.651 [2024-07-15 19:02:03.838416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.651 [2024-07-15 19:02:03.838473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:23.651 [2024-07-15 19:02:03.838493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.651 #27 NEW cov: 12132 ft: 14073 corp: 11/320b lim: 50 exec/s: 0 rss: 72Mb L: 27/46 MS: 5 CrossOver-ChangeBit-InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:23.651 [2024-07-15 19:02:03.888458] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.651 [2024-07-15 19:02:03.888487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.651 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:23.651 #28 NEW cov: 12155 ft: 14125 corp: 12/339b lim: 50 exec/s: 0 rss: 72Mb L: 19/46 MS: 1 ChangeByte- 00:07:23.651 [2024-07-15 19:02:03.939438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.651 [2024-07-15 19:02:03.939468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.651 [2024-07-15 19:02:03.939555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217161825438073603 len:34696 00:07:23.651 [2024-07-15 19:02:03.939573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.651 [2024-07-15 19:02:03.939662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9728622933743994759 len:772 
00:07:23.651 [2024-07-15 19:02:03.939686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.651 [2024-07-15 19:02:03.939782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:23.651 [2024-07-15 19:02:03.939800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.651 #29 NEW cov: 12155 ft: 14154 corp: 13/385b lim: 50 exec/s: 0 rss: 72Mb L: 46/46 MS: 1 ChangeBit- 00:07:23.651 [2024-07-15 19:02:03.989826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.651 [2024-07-15 19:02:03.989858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.651 [2024-07-15 19:02:03.989929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217161825438073603 len:34696 00:07:23.651 [2024-07-15 19:02:03.989946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.651 [2024-07-15 19:02:03.990013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9728622933743994759 len:772 00:07:23.651 [2024-07-15 19:02:03.990030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.651 [2024-07-15 19:02:03.990118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:23.651 [2024-07-15 19:02:03.990136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.651 [2024-07-15 19:02:03.990228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18375534216088649727 len:772 00:07:23.651 [2024-07-15 19:02:03.990259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:23.651 #30 NEW cov: 12155 ft: 14279 corp: 14/435b lim: 50 exec/s: 30 rss: 72Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:07:23.651 [2024-07-15 19:02:04.049980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.651 [2024-07-15 19:02:04.050010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.651 [2024-07-15 19:02:04.050103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217166223484584707 len:34696 00:07:23.651 [2024-07-15 19:02:04.050121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.651 [2024-07-15 19:02:04.050204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9727775197394077575 len:4 00:07:23.651 [2024-07-15 19:02:04.050225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.651 
[2024-07-15 19:02:04.050312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:23.651 [2024-07-15 19:02:04.050346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.651 [2024-07-15 19:02:04.050437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:217020518514230019 len:772 00:07:23.651 [2024-07-15 19:02:04.050459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:23.651 #31 NEW cov: 12155 ft: 14316 corp: 15/485b lim: 50 exec/s: 31 rss: 72Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:07:23.909 [2024-07-15 19:02:04.099767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.909 [2024-07-15 19:02:04.099801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.909 [2024-07-15 19:02:04.099865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 00:07:23.909 [2024-07-15 19:02:04.099885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.909 [2024-07-15 19:02:04.099940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744069465151491 len:65284 00:07:23.909 [2024-07-15 19:02:04.099963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.909 #32 NEW cov: 12155 ft: 14397 corp: 16/521b lim: 50 exec/s: 32 rss: 72Mb L: 36/50 MS: 1 InsertRepeatedBytes- 00:07:23.909 [2024-07-15 19:02:04.150373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.909 [2024-07-15 19:02:04.150402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.909 [2024-07-15 19:02:04.150472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 00:07:23.909 [2024-07-15 19:02:04.150488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.909 [2024-07-15 19:02:04.150570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 00:07:23.909 [2024-07-15 19:02:04.150589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.909 [2024-07-15 19:02:04.150682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:23.909 [2024-07-15 19:02:04.150699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.909 #33 NEW cov: 12155 ft: 14409 corp: 17/570b lim: 50 exec/s: 33 rss: 73Mb L: 49/50 MS: 1 CopyPart- 00:07:23.909 [2024-07-15 19:02:04.209988] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16638239752757634790 len:59111 00:07:23.909 [2024-07-15 19:02:04.210018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.909 #34 NEW cov: 12155 ft: 14450 corp: 18/589b lim: 50 exec/s: 34 rss: 73Mb L: 19/50 MS: 1 InsertByte- 00:07:23.909 [2024-07-15 19:02:04.271192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:23.909 [2024-07-15 19:02:04.271229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.909 [2024-07-15 19:02:04.271305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217161825438073603 len:34696 00:07:23.909 [2024-07-15 19:02:04.271325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.909 [2024-07-15 19:02:04.271403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9728622933743994759 len:772 00:07:23.909 [2024-07-15 19:02:04.271424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.909 [2024-07-15 19:02:04.271511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230051 len:772 00:07:23.909 [2024-07-15 19:02:04.271530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.909 [2024-07-15 19:02:04.271618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18375534216088649727 len:772 00:07:23.909 [2024-07-15 19:02:04.271637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:23.909 #35 NEW cov: 12155 ft: 14462 corp: 19/639b lim: 50 exec/s: 35 rss: 73Mb L: 50/50 MS: 1 ChangeBit- 00:07:23.909 [2024-07-15 19:02:04.331376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217298677893628675 len:65284 00:07:23.909 [2024-07-15 19:02:04.331411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.909 [2024-07-15 19:02:04.331474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217020518514230168 len:904 00:07:23.909 [2024-07-15 19:02:04.331494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.909 [2024-07-15 19:02:04.331578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9765923333140350855 len:772 00:07:23.909 [2024-07-15 19:02:04.331596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.909 [2024-07-15 19:02:04.331682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:23.909 [2024-07-15 19:02:04.331706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.168 #36 NEW cov: 12155 ft: 14484 corp: 20/688b lim: 50 exec/s: 36 rss: 73Mb L: 49/50 MS: 1 InsertRepeatedBytes- 00:07:24.168 [2024-07-15 19:02:04.391474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518634029827 len:772 00:07:24.168 [2024-07-15 19:02:04.391503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.168 [2024-07-15 19:02:04.391567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 00:07:24.168 [2024-07-15 19:02:04.391587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.168 [2024-07-15 19:02:04.391651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 00:07:24.168 [2024-07-15 19:02:04.391669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.168 #37 NEW cov: 12155 ft: 14558 corp: 21/720b lim: 50 exec/s: 37 rss: 73Mb L: 32/50 MS: 1 InsertByte- 00:07:24.168 [2024-07-15 19:02:04.442113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:24.168 [2024-07-15 19:02:04.442145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.168 [2024-07-15 19:02:04.442230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217166223484584707 len:42888 00:07:24.168 [2024-07-15 19:02:04.442248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.168 [2024-07-15 19:02:04.442313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9727775197394077575 len:4 00:07:24.168 [2024-07-15 19:02:04.442330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.168 [2024-07-15 19:02:04.442419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:24.168 [2024-07-15 19:02:04.442441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.168 [2024-07-15 19:02:04.442536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:217020518514230019 len:772 00:07:24.168 [2024-07-15 19:02:04.442558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:24.168 #38 NEW cov: 12155 ft: 14604 corp: 22/770b lim: 50 exec/s: 38 rss: 73Mb L: 50/50 MS: 1 ChangeBit- 00:07:24.168 [2024-07-15 19:02:04.501364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:24.168 [2024-07-15 19:02:04.501396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.168 #39 NEW cov: 
12155 ft: 14623 corp: 23/789b lim: 50 exec/s: 39 rss: 73Mb L: 19/50 MS: 1 PersAutoDict- DE: "\377\036"- 00:07:24.168 [2024-07-15 19:02:04.562509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:24.168 [2024-07-15 19:02:04.562537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.168 [2024-07-15 19:02:04.562610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:258960290044117763 len:772 00:07:24.168 [2024-07-15 19:02:04.562630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.168 [2024-07-15 19:02:04.562709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 00:07:24.168 [2024-07-15 19:02:04.562726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.168 [2024-07-15 19:02:04.562821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:24.168 [2024-07-15 19:02:04.562841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.168 #40 NEW cov: 12155 ft: 14654 corp: 24/831b lim: 50 exec/s: 40 rss: 73Mb L: 42/50 MS: 1 CopyPart- 00:07:24.427 [2024-07-15 19:02:04.612964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:24.427 [2024-07-15 19:02:04.612993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.427 [2024-07-15 19:02:04.613059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217166206304715523 len:34696 00:07:24.427 [2024-07-15 19:02:04.613077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.427 [2024-07-15 19:02:04.613151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9728622933743994759 len:772 00:07:24.427 [2024-07-15 19:02:04.613171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.427 [2024-07-15 19:02:04.613270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:24.427 [2024-07-15 19:02:04.613290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.427 #41 NEW cov: 12155 ft: 14661 corp: 25/877b lim: 50 exec/s: 41 rss: 73Mb L: 46/50 MS: 1 ChangeBit- 00:07:24.427 [2024-07-15 19:02:04.662673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16645056724849845990 len:59111 00:07:24.427 [2024-07-15 19:02:04.662701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.427 #42 NEW cov: 12155 ft: 14687 corp: 26/895b lim: 50 exec/s: 42 rss: 73Mb L: 18/50 MS: 1 
PersAutoDict- DE: "\377\036"- 00:07:24.427 [2024-07-15 19:02:04.713772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:24.427 [2024-07-15 19:02:04.713802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.427 [2024-07-15 19:02:04.713869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:216884731328004867 len:34696 00:07:24.427 [2024-07-15 19:02:04.713888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.427 [2024-07-15 19:02:04.713956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9728622933743994759 len:772 00:07:24.427 [2024-07-15 19:02:04.713975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.427 [2024-07-15 19:02:04.714062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:24.427 [2024-07-15 19:02:04.714083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.427 #43 NEW cov: 12155 ft: 14702 corp: 27/941b lim: 50 exec/s: 43 rss: 73Mb L: 46/50 MS: 1 ChangeBit- 00:07:24.427 [2024-07-15 19:02:04.774091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:24.427 [2024-07-15 19:02:04.774119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.427 [2024-07-15 19:02:04.774205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:216172782164312835 len:1 00:07:24.427 [2024-07-15 19:02:04.774230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.427 [2024-07-15 19:02:04.774315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:217020518463700992 len:772 00:07:24.427 [2024-07-15 19:02:04.774335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.427 [2024-07-15 19:02:04.774435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744069474878463 len:772 00:07:24.427 [2024-07-15 19:02:04.774456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.427 #44 NEW cov: 12155 ft: 14713 corp: 28/986b lim: 50 exec/s: 44 rss: 73Mb L: 45/50 MS: 1 InsertRepeatedBytes- 00:07:24.427 [2024-07-15 19:02:04.844559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:24.427 [2024-07-15 19:02:04.844589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.427 [2024-07-15 19:02:04.844662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:216884731328004867 
len:34696 00:07:24.427 [2024-07-15 19:02:04.844679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.427 [2024-07-15 19:02:04.844750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9728622933743994759 len:772 00:07:24.427 [2024-07-15 19:02:04.844767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.427 [2024-07-15 19:02:04.844853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:24.427 [2024-07-15 19:02:04.844872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.686 #45 NEW cov: 12155 ft: 14723 corp: 29/1034b lim: 50 exec/s: 45 rss: 73Mb L: 48/50 MS: 1 PersAutoDict- DE: "\377\036"- 00:07:24.686 [2024-07-15 19:02:04.905191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 00:07:24.686 [2024-07-15 19:02:04.905225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.686 [2024-07-15 19:02:04.905341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217166220984779672 len:34696 00:07:24.686 [2024-07-15 19:02:04.905362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.686 [2024-07-15 19:02:04.905445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9727775197394077575 len:4 00:07:24.686 [2024-07-15 19:02:04.905464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.686 [2024-07-15 19:02:04.905554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:24.686 [2024-07-15 19:02:04.905576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.686 [2024-07-15 19:02:04.905668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:217020518514230019 len:772 00:07:24.686 [2024-07-15 19:02:04.905688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:24.686 #46 NEW cov: 12155 ft: 14747 corp: 30/1084b lim: 50 exec/s: 46 rss: 73Mb L: 50/50 MS: 1 ShuffleBytes- 00:07:24.686 [2024-07-15 19:02:04.955680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:217020518631670531 len:12801 00:07:24.686 [2024-07-15 19:02:04.955707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.686 [2024-07-15 19:02:04.955830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217166220984779672 len:34696 00:07:24.686 [2024-07-15 19:02:04.955847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 
m:0 dnr:1 00:07:24.686 [2024-07-15 19:02:04.955937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9727775197394077575 len:4 00:07:24.686 [2024-07-15 19:02:04.955956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.686 [2024-07-15 19:02:04.956045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:24.686 [2024-07-15 19:02:04.956062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.686 [2024-07-15 19:02:04.956160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:217020518514230019 len:772 00:07:24.686 [2024-07-15 19:02:04.956181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:24.686 #47 NEW cov: 12155 ft: 14749 corp: 31/1134b lim: 50 exec/s: 47 rss: 73Mb L: 50/50 MS: 1 ChangeBinInt- 00:07:24.686 [2024-07-15 19:02:05.015776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9765922764098865031 len:772 00:07:24.686 [2024-07-15 19:02:05.015805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.686 [2024-07-15 19:02:05.015889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217166220984779523 len:34696 00:07:24.686 [2024-07-15 19:02:05.015907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.686 [2024-07-15 19:02:05.015983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9728622933743994759 len:772 00:07:24.686 [2024-07-15 19:02:05.016003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.686 [2024-07-15 19:02:05.016093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 00:07:24.686 [2024-07-15 19:02:05.016114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.686 #48 NEW cov: 12155 ft: 14751 corp: 32/1180b lim: 50 exec/s: 24 rss: 74Mb L: 46/50 MS: 1 CopyPart- 00:07:24.686 #48 DONE cov: 12155 ft: 14751 corp: 32/1180b lim: 50 exec/s: 24 rss: 74Mb 00:07:24.686 ###### Recommended dictionary. ###### 00:07:24.686 "\377\036" # Uses: 3 00:07:24.686 ###### End of recommended dictionary. 
###### 00:07:24.686 Done 48 runs in 2 second(s) 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:24.946 19:02:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:24.946 [2024-07-15 19:02:05.207281] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
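A note on reading the fuzzer output above and below: each "#N NEW" / "#N DONE" entry is a standard libFuzzer status line. "cov:" is the number of coverage points hit so far, "ft:" the number of features observed, "corp:" the corpus size in units and total bytes, "lim:" the input-length cap currently in effect, "exec/s" the execution rate, "rss" resident memory, "L:" the length of the new input versus the largest input in the corpus, and "MS:" the mutation sequence that produced it ("DE:" names a dictionary entry the mutator applied; the octal "\377\036" in the recommended-dictionary block above denotes the bytes 0xFF 0x1E). The commands below are a minimal sketch for pulling per-stage summaries out of a saved console log; "console.log" and "nvmf.dict" are assumed filenames, and none of this is part of the SPDK test scripts:

  log=console.log    # assumed: this Jenkins console output, saved to a file
  # One line per fuzzer stage: how many runs completed and how long they took.
  grep -o 'Done [0-9]* runs in [0-9]* second(s)' "$log"
  # The final coverage each stage reached (its "#N DONE" status line).
  grep -o '#[0-9]* DONE cov: [0-9]* ft: [0-9]* corp: [0-9]*/[0-9]*b lim: [0-9]*' "$log"
  # To reuse a recommended dictionary entry in a later run, write it in
  # libFuzzer's -dict= file format (hex escapes for the raw bytes), e.g.:
  echo 'kw1="\xff\x1e"' > nvmf.dict
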
00:07:24.946 [2024-07-15 19:02:05.207350] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676114 ] 00:07:24.946 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.205 [2024-07-15 19:02:05.421212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.205 [2024-07-15 19:02:05.494272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.205 [2024-07-15 19:02:05.554104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.205 [2024-07-15 19:02:05.570434] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:25.205 INFO: Running with entropic power schedule (0xFF, 100). 00:07:25.205 INFO: Seed: 3148311126 00:07:25.205 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:25.205 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:25.205 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:25.205 INFO: A corpus is not provided, starting from an empty corpus 00:07:25.205 #2 INITED exec/s: 0 rss: 64Mb 00:07:25.205 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:25.205 This may also happen if the target rejected all inputs we tried so far 00:07:25.463 [2024-07-15 19:02:05.648680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:25.463 [2024-07-15 19:02:05.648728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.463 [2024-07-15 19:02:05.648829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:25.463 [2024-07-15 19:02:05.648850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.463 [2024-07-15 19:02:05.648957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:25.463 [2024-07-15 19:02:05.648980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.463 [2024-07-15 19:02:05.649074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:25.463 [2024-07-15 19:02:05.649093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.463 [2024-07-15 19:02:05.649202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:25.464 [2024-07-15 19:02:05.649224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:25.722 NEW_FUNC[1/697]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:25.722 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:25.722 #11 NEW cov: 11969 ft: 11969 corp: 2/91b lim: 90 exec/s: 0 rss: 72Mb L: 
90/90 MS: 4 ShuffleBytes-InsertByte-CopyPart-InsertRepeatedBytes- 00:07:25.722 [2024-07-15 19:02:05.998712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:25.722 [2024-07-15 19:02:05.998759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.722 [2024-07-15 19:02:05.998847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:25.722 [2024-07-15 19:02:05.998865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.722 [2024-07-15 19:02:05.998962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:25.722 [2024-07-15 19:02:05.998983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.722 #14 NEW cov: 12099 ft: 12946 corp: 3/148b lim: 90 exec/s: 0 rss: 72Mb L: 57/90 MS: 3 ChangeBit-CopyPart-CrossOver- 00:07:25.722 [2024-07-15 19:02:06.049776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:25.722 [2024-07-15 19:02:06.049810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.722 [2024-07-15 19:02:06.049877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:25.722 [2024-07-15 19:02:06.049896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.722 [2024-07-15 19:02:06.049970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:25.722 [2024-07-15 19:02:06.049987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.722 [2024-07-15 19:02:06.050087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:25.722 [2024-07-15 19:02:06.050110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.722 [2024-07-15 19:02:06.050222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:25.722 [2024-07-15 19:02:06.050241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:25.722 #15 NEW cov: 12105 ft: 13197 corp: 4/238b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:25.722 [2024-07-15 19:02:06.109120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:25.723 [2024-07-15 19:02:06.109151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.723 [2024-07-15 19:02:06.109261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:25.723 [2024-07-15 19:02:06.109277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.723 #16 NEW cov: 12190 ft: 
13747 corp: 5/287b lim: 90 exec/s: 0 rss: 72Mb L: 49/90 MS: 1 CrossOver- 00:07:25.981 [2024-07-15 19:02:06.169051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:25.981 [2024-07-15 19:02:06.169083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.981 #21 NEW cov: 12190 ft: 14564 corp: 6/306b lim: 90 exec/s: 0 rss: 72Mb L: 19/90 MS: 5 InsertByte-ShuffleBytes-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:07:25.981 [2024-07-15 19:02:06.220144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:25.981 [2024-07-15 19:02:06.220176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.981 [2024-07-15 19:02:06.220268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:25.981 [2024-07-15 19:02:06.220289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.981 [2024-07-15 19:02:06.220337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:25.981 [2024-07-15 19:02:06.220356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.981 [2024-07-15 19:02:06.220450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:25.981 [2024-07-15 19:02:06.220470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.981 #22 NEW cov: 12190 ft: 14626 corp: 7/393b lim: 90 exec/s: 0 rss: 72Mb L: 87/90 MS: 1 InsertRepeatedBytes- 00:07:25.981 [2024-07-15 19:02:06.269771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:25.981 [2024-07-15 19:02:06.269806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.981 [2024-07-15 19:02:06.269888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:25.981 [2024-07-15 19:02:06.269909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.981 #23 NEW cov: 12190 ft: 14730 corp: 8/430b lim: 90 exec/s: 0 rss: 72Mb L: 37/90 MS: 1 InsertRepeatedBytes- 00:07:25.981 [2024-07-15 19:02:06.319914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:25.981 [2024-07-15 19:02:06.319946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.981 #24 NEW cov: 12190 ft: 14744 corp: 9/449b lim: 90 exec/s: 0 rss: 72Mb L: 19/90 MS: 1 ShuffleBytes- 00:07:25.981 [2024-07-15 19:02:06.381710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:25.981 [2024-07-15 19:02:06.381740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.981 [2024-07-15 19:02:06.381815] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:25.981 [2024-07-15 19:02:06.381836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.981 [2024-07-15 19:02:06.381895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:25.981 [2024-07-15 19:02:06.381914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.981 [2024-07-15 19:02:06.382007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:25.981 [2024-07-15 19:02:06.382027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.981 [2024-07-15 19:02:06.382117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:25.981 [2024-07-15 19:02:06.382137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:25.981 #25 NEW cov: 12190 ft: 14781 corp: 10/539b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 ChangeByte- 00:07:26.240 [2024-07-15 19:02:06.430791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.240 [2024-07-15 19:02:06.430823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.240 [2024-07-15 19:02:06.430900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:26.240 [2024-07-15 19:02:06.430922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.240 #26 NEW cov: 12190 ft: 14814 corp: 11/589b lim: 90 exec/s: 0 rss: 72Mb L: 50/90 MS: 1 InsertByte- 00:07:26.240 [2024-07-15 19:02:06.491567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.240 [2024-07-15 19:02:06.491599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.240 [2024-07-15 19:02:06.491672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:26.240 [2024-07-15 19:02:06.491696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.240 [2024-07-15 19:02:06.491756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:26.240 [2024-07-15 19:02:06.491773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.240 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:26.240 #27 NEW cov: 12213 ft: 14853 corp: 12/649b lim: 90 exec/s: 0 rss: 73Mb L: 60/90 MS: 1 EraseBytes- 00:07:26.240 [2024-07-15 19:02:06.562449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.240 [2024-07-15 19:02:06.562478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.240 [2024-07-15 19:02:06.562569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:26.240 [2024-07-15 19:02:06.562589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.240 [2024-07-15 19:02:06.562691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:26.240 [2024-07-15 19:02:06.562713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.240 [2024-07-15 19:02:06.562809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:26.240 [2024-07-15 19:02:06.562830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.240 [2024-07-15 19:02:06.562931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:26.240 [2024-07-15 19:02:06.562954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.240 #33 NEW cov: 12213 ft: 14886 corp: 13/739b lim: 90 exec/s: 0 rss: 73Mb L: 90/90 MS: 1 CrossOver- 00:07:26.240 [2024-07-15 19:02:06.621686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.240 [2024-07-15 19:02:06.621715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.240 [2024-07-15 19:02:06.621810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:26.240 [2024-07-15 19:02:06.621829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.240 #34 NEW cov: 12213 ft: 14901 corp: 14/785b lim: 90 exec/s: 34 rss: 73Mb L: 46/90 MS: 1 InsertRepeatedBytes- 00:07:26.498 [2024-07-15 19:02:06.681575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.498 [2024-07-15 19:02:06.681611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.498 #35 NEW cov: 12213 ft: 14924 corp: 15/803b lim: 90 exec/s: 35 rss: 73Mb L: 18/90 MS: 1 EraseBytes- 00:07:26.498 [2024-07-15 19:02:06.731780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.498 [2024-07-15 19:02:06.731812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.498 #36 NEW cov: 12213 ft: 14947 corp: 16/823b lim: 90 exec/s: 36 rss: 73Mb L: 20/90 MS: 1 InsertByte- 00:07:26.498 [2024-07-15 19:02:06.792329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.498 [2024-07-15 19:02:06.792358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.498 [2024-07-15 19:02:06.792436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 
00:07:26.498 [2024-07-15 19:02:06.792455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.498 #37 NEW cov: 12213 ft: 14975 corp: 17/867b lim: 90 exec/s: 37 rss: 73Mb L: 44/90 MS: 1 EraseBytes- 00:07:26.498 [2024-07-15 19:02:06.842158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.498 [2024-07-15 19:02:06.842188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.498 #38 NEW cov: 12213 ft: 14997 corp: 18/902b lim: 90 exec/s: 38 rss: 73Mb L: 35/90 MS: 1 EraseBytes- 00:07:26.498 [2024-07-15 19:02:06.912560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.498 [2024-07-15 19:02:06.912594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.756 #44 NEW cov: 12213 ft: 15023 corp: 19/921b lim: 90 exec/s: 44 rss: 73Mb L: 19/90 MS: 1 ShuffleBytes- 00:07:26.756 [2024-07-15 19:02:06.963112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.756 [2024-07-15 19:02:06.963145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.756 [2024-07-15 19:02:06.963237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:26.756 [2024-07-15 19:02:06.963259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.756 #45 NEW cov: 12213 ft: 15067 corp: 20/965b lim: 90 exec/s: 45 rss: 73Mb L: 44/90 MS: 1 ShuffleBytes- 00:07:26.756 [2024-07-15 19:02:07.033804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.756 [2024-07-15 19:02:07.033837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.756 [2024-07-15 19:02:07.033912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:26.756 [2024-07-15 19:02:07.033933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.756 [2024-07-15 19:02:07.034009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:26.756 [2024-07-15 19:02:07.034026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.756 #46 NEW cov: 12213 ft: 15071 corp: 21/1030b lim: 90 exec/s: 46 rss: 73Mb L: 65/90 MS: 1 InsertRepeatedBytes- 00:07:26.756 [2024-07-15 19:02:07.103673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.756 [2024-07-15 19:02:07.103705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.756 [2024-07-15 19:02:07.103816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:26.756 [2024-07-15 19:02:07.103838] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.756 #47 NEW cov: 12213 ft: 15104 corp: 22/1079b lim: 90 exec/s: 47 rss: 73Mb L: 49/90 MS: 1 ChangeBit- 00:07:26.756 [2024-07-15 19:02:07.154804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:26.756 [2024-07-15 19:02:07.154839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.756 [2024-07-15 19:02:07.154930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:26.756 [2024-07-15 19:02:07.154951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.756 [2024-07-15 19:02:07.155017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:26.756 [2024-07-15 19:02:07.155033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.756 [2024-07-15 19:02:07.155131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:26.756 [2024-07-15 19:02:07.155151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.756 [2024-07-15 19:02:07.155243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:26.756 [2024-07-15 19:02:07.155264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.756 #48 NEW cov: 12213 ft: 15133 corp: 23/1169b lim: 90 exec/s: 48 rss: 73Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:27.014 [2024-07-15 19:02:07.204039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:27.014 [2024-07-15 19:02:07.204071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.014 [2024-07-15 19:02:07.204165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:27.014 [2024-07-15 19:02:07.204187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.014 #54 NEW cov: 12213 ft: 15159 corp: 24/1218b lim: 90 exec/s: 54 rss: 73Mb L: 49/90 MS: 1 ShuffleBytes- 00:07:27.014 [2024-07-15 19:02:07.254198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:27.014 [2024-07-15 19:02:07.254233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.014 [2024-07-15 19:02:07.254335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:27.014 [2024-07-15 19:02:07.254350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.014 #55 NEW cov: 12213 ft: 15189 corp: 25/1262b lim: 90 exec/s: 55 rss: 73Mb L: 44/90 MS: 1 EraseBytes- 00:07:27.014 [2024-07-15 19:02:07.305048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE 
(11) sqid:1 cid:0 nsid:0 00:07:27.014 [2024-07-15 19:02:07.305081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.014 [2024-07-15 19:02:07.305152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:27.014 [2024-07-15 19:02:07.305172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.014 [2024-07-15 19:02:07.305235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:27.014 [2024-07-15 19:02:07.305255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.014 #56 NEW cov: 12213 ft: 15210 corp: 26/1319b lim: 90 exec/s: 56 rss: 73Mb L: 57/90 MS: 1 ShuffleBytes- 00:07:27.014 [2024-07-15 19:02:07.354650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:27.014 [2024-07-15 19:02:07.354682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.014 #57 NEW cov: 12213 ft: 15229 corp: 27/1340b lim: 90 exec/s: 57 rss: 73Mb L: 21/90 MS: 1 InsertByte- 00:07:27.014 [2024-07-15 19:02:07.415878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:27.014 [2024-07-15 19:02:07.415910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.014 [2024-07-15 19:02:07.415988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:27.014 [2024-07-15 19:02:07.416008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.014 [2024-07-15 19:02:07.416071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:27.014 [2024-07-15 19:02:07.416089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.273 #58 NEW cov: 12213 ft: 15312 corp: 28/1400b lim: 90 exec/s: 58 rss: 73Mb L: 60/90 MS: 1 EraseBytes- 00:07:27.273 [2024-07-15 19:02:07.476413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:27.273 [2024-07-15 19:02:07.476443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.273 [2024-07-15 19:02:07.476518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:27.273 [2024-07-15 19:02:07.476539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.273 [2024-07-15 19:02:07.476605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:27.273 [2024-07-15 19:02:07.476629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.273 #59 NEW cov: 12213 ft: 15326 corp: 29/1458b lim: 90 exec/s: 59 rss: 73Mb L: 58/90 MS: 1 InsertRepeatedBytes- 
00:07:27.273 [2024-07-15 19:02:07.526007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:27.273 [2024-07-15 19:02:07.526037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.273 #60 NEW cov: 12213 ft: 15334 corp: 30/1476b lim: 90 exec/s: 60 rss: 74Mb L: 18/90 MS: 1 ChangeBit- 00:07:27.273 [2024-07-15 19:02:07.587260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:27.273 [2024-07-15 19:02:07.587289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.273 [2024-07-15 19:02:07.587377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:27.273 [2024-07-15 19:02:07.587401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.273 [2024-07-15 19:02:07.587467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:27.273 [2024-07-15 19:02:07.587482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.273 #61 NEW cov: 12213 ft: 15340 corp: 31/1536b lim: 90 exec/s: 61 rss: 74Mb L: 60/90 MS: 1 ShuffleBytes- 00:07:27.273 [2024-07-15 19:02:07.637368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:27.273 [2024-07-15 19:02:07.637400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.273 [2024-07-15 19:02:07.637470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:27.273 [2024-07-15 19:02:07.637490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.273 [2024-07-15 19:02:07.637552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:27.273 [2024-07-15 19:02:07.637571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.273 #62 NEW cov: 12213 ft: 15353 corp: 32/1594b lim: 90 exec/s: 31 rss: 74Mb L: 58/90 MS: 1 ChangeBit- 00:07:27.273 #62 DONE cov: 12213 ft: 15353 corp: 32/1594b lim: 90 exec/s: 31 rss: 74Mb 00:07:27.273 Done 62 runs in 2 second(s) 00:07:27.531 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:27.531 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.531 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.531 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:27.532 19:02:07 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:27.532 19:02:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:27.532 [2024-07-15 19:02:07.843143] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:27.532 [2024-07-15 19:02:07.843234] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676485 ] 00:07:27.532 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.790 [2024-07-15 19:02:08.055290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.790 [2024-07-15 19:02:08.125148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.790 [2024-07-15 19:02:08.184719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.790 [2024-07-15 19:02:08.201001] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:27.790 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.790 INFO: Seed: 1483384104 00:07:28.049 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:28.049 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:28.049 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:28.049 INFO: A corpus is not provided, starting from an empty corpus 00:07:28.049 #2 INITED exec/s: 0 rss: 65Mb 00:07:28.049 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:28.049 This may also happen if the target rejected all inputs we tried so far 00:07:28.049 [2024-07-15 19:02:08.277974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.049 [2024-07-15 19:02:08.278017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.308 NEW_FUNC[1/697]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:28.308 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:28.308 #15 NEW cov: 11944 ft: 11921 corp: 2/11b lim: 50 exec/s: 0 rss: 72Mb L: 10/10 MS: 3 InsertByte-CopyPart-InsertRepeatedBytes- 00:07:28.308 [2024-07-15 19:02:08.627827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.308 [2024-07-15 19:02:08.627893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.308 [2024-07-15 19:02:08.627979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:28.308 [2024-07-15 19:02:08.628009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.308 [2024-07-15 19:02:08.628091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:28.308 [2024-07-15 19:02:08.628119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.308 [2024-07-15 19:02:08.628206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:28.308 [2024-07-15 19:02:08.628244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.308 #17 NEW cov: 12074 ft: 13346 corp: 3/53b lim: 50 exec/s: 0 rss: 72Mb L: 42/42 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:28.308 [2024-07-15 19:02:08.687215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.308 [2024-07-15 19:02:08.687251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.308 #23 NEW cov: 12080 ft: 13712 corp: 4/64b lim: 50 exec/s: 0 rss: 72Mb L: 11/42 MS: 1 CrossOver- 00:07:28.308 [2024-07-15 19:02:08.727327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.308 [2024-07-15 19:02:08.727360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.567 #24 NEW cov: 12165 ft: 14047 corp: 5/78b lim: 50 exec/s: 0 rss: 72Mb L: 14/42 MS: 1 CopyPart- 00:07:28.567 [2024-07-15 19:02:08.767437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.567 [2024-07-15 19:02:08.767468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.567 #25 NEW cov: 12165 ft: 14131 corp: 6/93b lim: 50 exec/s: 0 rss: 72Mb 
L: 15/42 MS: 1 InsertByte- 00:07:28.567 [2024-07-15 19:02:08.817969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.567 [2024-07-15 19:02:08.817998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.567 [2024-07-15 19:02:08.818039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:28.567 [2024-07-15 19:02:08.818054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.567 [2024-07-15 19:02:08.818108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:28.567 [2024-07-15 19:02:08.818124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.567 [2024-07-15 19:02:08.818179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:28.567 [2024-07-15 19:02:08.818195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.567 #26 NEW cov: 12165 ft: 14219 corp: 7/135b lim: 50 exec/s: 0 rss: 72Mb L: 42/42 MS: 1 CrossOver- 00:07:28.567 [2024-07-15 19:02:08.867712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.567 [2024-07-15 19:02:08.867740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.567 #27 NEW cov: 12165 ft: 14315 corp: 8/149b lim: 50 exec/s: 0 rss: 72Mb L: 14/42 MS: 1 ChangeByte- 00:07:28.567 [2024-07-15 19:02:08.908236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.567 [2024-07-15 19:02:08.908262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.567 [2024-07-15 19:02:08.908325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:28.567 [2024-07-15 19:02:08.908341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.567 [2024-07-15 19:02:08.908398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:28.567 [2024-07-15 19:02:08.908413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.567 [2024-07-15 19:02:08.908469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:28.567 [2024-07-15 19:02:08.908486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.567 #31 NEW cov: 12165 ft: 14358 corp: 9/194b lim: 50 exec/s: 0 rss: 72Mb L: 45/45 MS: 4 EraseBytes-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:07:28.567 [2024-07-15 19:02:08.947948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.567 [2024-07-15 19:02:08.947975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.567 #32 NEW cov: 12165 ft: 14411 corp: 10/208b lim: 50 exec/s: 0 rss: 72Mb L: 14/45 MS: 1 ShuffleBytes- 00:07:28.567 [2024-07-15 19:02:08.988438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.567 [2024-07-15 19:02:08.988465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.567 [2024-07-15 19:02:08.988501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:28.567 [2024-07-15 19:02:08.988517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.567 [2024-07-15 19:02:08.988572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:28.567 [2024-07-15 19:02:08.988587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.567 [2024-07-15 19:02:08.988642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:28.567 [2024-07-15 19:02:08.988658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.826 #33 NEW cov: 12165 ft: 14480 corp: 11/253b lim: 50 exec/s: 0 rss: 72Mb L: 45/45 MS: 1 ShuffleBytes- 00:07:28.826 [2024-07-15 19:02:09.038166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.826 [2024-07-15 19:02:09.038193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.826 #34 NEW cov: 12165 ft: 14509 corp: 12/269b lim: 50 exec/s: 0 rss: 72Mb L: 16/45 MS: 1 CrossOver- 00:07:28.826 [2024-07-15 19:02:09.088265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.826 [2024-07-15 19:02:09.088293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.826 #35 NEW cov: 12165 ft: 14533 corp: 13/283b lim: 50 exec/s: 0 rss: 72Mb L: 14/45 MS: 1 CopyPart- 00:07:28.826 [2024-07-15 19:02:09.138393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.826 [2024-07-15 19:02:09.138421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.826 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:28.826 #36 NEW cov: 12188 ft: 14575 corp: 14/297b lim: 50 exec/s: 0 rss: 73Mb L: 14/45 MS: 1 ChangeByte- 00:07:28.826 [2024-07-15 19:02:09.188545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:28.826 [2024-07-15 19:02:09.188573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.826 #42 NEW cov: 12188 ft: 14590 corp: 15/313b lim: 50 exec/s: 0 rss: 73Mb L: 16/45 MS: 1 CopyPart- 00:07:28.826 [2024-07-15 19:02:09.239330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 
cid:0 nsid:0 00:07:28.826 [2024-07-15 19:02:09.239357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.826 [2024-07-15 19:02:09.239404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:28.826 [2024-07-15 19:02:09.239418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.826 [2024-07-15 19:02:09.239473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:28.826 [2024-07-15 19:02:09.239489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.826 [2024-07-15 19:02:09.239541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:28.826 [2024-07-15 19:02:09.239557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.826 [2024-07-15 19:02:09.239610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:28.826 [2024-07-15 19:02:09.239625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.086 #43 NEW cov: 12188 ft: 14680 corp: 16/363b lim: 50 exec/s: 43 rss: 73Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:07:29.086 [2024-07-15 19:02:09.289283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.086 [2024-07-15 19:02:09.289310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.289372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:29.086 [2024-07-15 19:02:09.289388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.289442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:29.086 [2024-07-15 19:02:09.289457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.289511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:29.086 [2024-07-15 19:02:09.289528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.086 #44 NEW cov: 12188 ft: 14687 corp: 17/405b lim: 50 exec/s: 44 rss: 73Mb L: 42/50 MS: 1 ChangeBit- 00:07:29.086 [2024-07-15 19:02:09.329510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.086 [2024-07-15 19:02:09.329536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.329599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:29.086 [2024-07-15 19:02:09.329615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.329667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:29.086 [2024-07-15 19:02:09.329683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.329737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:29.086 [2024-07-15 19:02:09.329751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.329808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:29.086 [2024-07-15 19:02:09.329824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.086 #45 NEW cov: 12188 ft: 14720 corp: 18/455b lim: 50 exec/s: 45 rss: 73Mb L: 50/50 MS: 1 ChangeBinInt- 00:07:29.086 [2024-07-15 19:02:09.379547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.086 [2024-07-15 19:02:09.379574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.379637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:29.086 [2024-07-15 19:02:09.379654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.379708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:29.086 [2024-07-15 19:02:09.379724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.379777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:29.086 [2024-07-15 19:02:09.379793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.086 #46 NEW cov: 12188 ft: 14736 corp: 19/500b lim: 50 exec/s: 46 rss: 73Mb L: 45/50 MS: 1 ShuffleBytes- 00:07:29.086 [2024-07-15 19:02:09.419334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.086 [2024-07-15 19:02:09.419361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.419412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:29.086 [2024-07-15 19:02:09.419427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.086 #47 NEW cov: 12188 ft: 15047 corp: 20/529b lim: 50 exec/s: 47 rss: 73Mb L: 29/50 MS: 1 EraseBytes- 00:07:29.086 [2024-07-15 19:02:09.459368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.086 [2024-07-15 19:02:09.459395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.086 #48 NEW cov: 12188 ft: 15056 corp: 21/539b lim: 50 exec/s: 48 rss: 73Mb L: 10/50 MS: 1 EraseBytes- 00:07:29.086 [2024-07-15 19:02:09.499859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.086 [2024-07-15 19:02:09.499886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.499946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:29.086 [2024-07-15 19:02:09.499962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.500016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:29.086 [2024-07-15 19:02:09.500030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.086 [2024-07-15 19:02:09.500086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:29.086 [2024-07-15 19:02:09.500101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.345 #49 NEW cov: 12188 ft: 15078 corp: 22/581b lim: 50 exec/s: 49 rss: 73Mb L: 42/50 MS: 1 ChangeBinInt- 00:07:29.345 [2024-07-15 19:02:09.549574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.345 [2024-07-15 19:02:09.549601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.345 #50 NEW cov: 12188 ft: 15097 corp: 23/598b lim: 50 exec/s: 50 rss: 73Mb L: 17/50 MS: 1 InsertByte- 00:07:29.345 [2024-07-15 19:02:09.589698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.345 [2024-07-15 19:02:09.589726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.345 #51 NEW cov: 12188 ft: 15112 corp: 24/616b lim: 50 exec/s: 51 rss: 73Mb L: 18/50 MS: 1 CMP- DE: "\001\000\000\010"- 00:07:29.345 [2024-07-15 19:02:09.639822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.345 [2024-07-15 19:02:09.639851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.345 #52 NEW cov: 12188 ft: 15123 corp: 25/632b lim: 50 exec/s: 52 rss: 73Mb L: 16/50 MS: 1 ChangeBit- 00:07:29.345 [2024-07-15 19:02:09.680564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.345 [2024-07-15 19:02:09.680593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.345 [2024-07-15 19:02:09.680639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:29.345 [2024-07-15 19:02:09.680655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:29.345 [2024-07-15 19:02:09.680708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:29.345 [2024-07-15 19:02:09.680722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.345 [2024-07-15 19:02:09.680774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:29.345 [2024-07-15 19:02:09.680789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.345 [2024-07-15 19:02:09.680842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:29.345 [2024-07-15 19:02:09.680857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.345 #53 NEW cov: 12188 ft: 15212 corp: 26/682b lim: 50 exec/s: 53 rss: 73Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:07:29.345 [2024-07-15 19:02:09.720053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.345 [2024-07-15 19:02:09.720081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.345 #54 NEW cov: 12188 ft: 15376 corp: 27/696b lim: 50 exec/s: 54 rss: 73Mb L: 14/50 MS: 1 PersAutoDict- DE: "\001\000\000\010"- 00:07:29.345 [2024-07-15 19:02:09.770674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.345 [2024-07-15 19:02:09.770705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.345 [2024-07-15 19:02:09.770743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:29.345 [2024-07-15 19:02:09.770761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.345 [2024-07-15 19:02:09.770815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:29.345 [2024-07-15 19:02:09.770830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.345 [2024-07-15 19:02:09.770886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:29.345 [2024-07-15 19:02:09.770902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.604 #55 NEW cov: 12188 ft: 15389 corp: 28/738b lim: 50 exec/s: 55 rss: 73Mb L: 42/50 MS: 1 ShuffleBytes- 00:07:29.604 [2024-07-15 19:02:09.820371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.604 [2024-07-15 19:02:09.820401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.604 #56 NEW cov: 12188 ft: 15411 corp: 29/748b lim: 50 exec/s: 56 rss: 73Mb L: 10/50 MS: 1 CrossOver- 00:07:29.604 [2024-07-15 19:02:09.860586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.604 [2024-07-15 
19:02:09.860618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.604 [2024-07-15 19:02:09.860671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:29.604 [2024-07-15 19:02:09.860686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.604 #60 NEW cov: 12188 ft: 15448 corp: 30/770b lim: 50 exec/s: 60 rss: 73Mb L: 22/50 MS: 4 EraseBytes-PersAutoDict-ChangeBinInt-CrossOver- DE: "\001\000\000\010"- 00:07:29.604 [2024-07-15 19:02:09.910675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.604 [2024-07-15 19:02:09.910705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.604 #61 NEW cov: 12188 ft: 15452 corp: 31/785b lim: 50 exec/s: 61 rss: 73Mb L: 15/50 MS: 1 ChangeByte- 00:07:29.604 [2024-07-15 19:02:09.950719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.604 [2024-07-15 19:02:09.950748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.604 #62 NEW cov: 12188 ft: 15459 corp: 32/801b lim: 50 exec/s: 62 rss: 73Mb L: 16/50 MS: 1 CMP- DE: "\031\000\000\000"- 00:07:29.604 [2024-07-15 19:02:09.990868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.604 [2024-07-15 19:02:09.990896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.604 #63 NEW cov: 12188 ft: 15461 corp: 33/811b lim: 50 exec/s: 63 rss: 73Mb L: 10/50 MS: 1 ChangeByte- 00:07:29.604 [2024-07-15 19:02:10.030937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.604 [2024-07-15 19:02:10.030970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.863 #64 NEW cov: 12188 ft: 15477 corp: 34/829b lim: 50 exec/s: 64 rss: 74Mb L: 18/50 MS: 1 CopyPart- 00:07:29.863 [2024-07-15 19:02:10.091457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.863 [2024-07-15 19:02:10.091497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.863 [2024-07-15 19:02:10.091554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:29.863 [2024-07-15 19:02:10.091571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.863 [2024-07-15 19:02:10.091623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:29.863 [2024-07-15 19:02:10.091639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.863 #65 NEW cov: 12188 ft: 15723 corp: 35/868b lim: 50 exec/s: 65 rss: 74Mb L: 39/50 MS: 1 CopyPart- 00:07:29.863 [2024-07-15 19:02:10.141235] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.863 [2024-07-15 19:02:10.141263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.863 #66 NEW cov: 12188 ft: 15728 corp: 36/882b lim: 50 exec/s: 66 rss: 74Mb L: 14/50 MS: 1 CopyPart- 00:07:29.863 [2024-07-15 19:02:10.181324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.863 [2024-07-15 19:02:10.181352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.863 #67 NEW cov: 12188 ft: 15785 corp: 37/896b lim: 50 exec/s: 67 rss: 74Mb L: 14/50 MS: 1 CopyPart- 00:07:29.863 [2024-07-15 19:02:10.221412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:29.863 [2024-07-15 19:02:10.221439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.863 #68 NEW cov: 12188 ft: 15798 corp: 38/912b lim: 50 exec/s: 34 rss: 74Mb L: 16/50 MS: 1 PersAutoDict- DE: "\001\000\000\010"- 00:07:29.863 #68 DONE cov: 12188 ft: 15798 corp: 38/912b lim: 50 exec/s: 34 rss: 74Mb 00:07:29.863 ###### Recommended dictionary. ###### 00:07:29.863 "\001\000\000\010" # Uses: 3 00:07:29.863 "\031\000\000\000" # Uses: 0 00:07:29.863 ###### End of recommended dictionary. ###### 00:07:29.863 Done 68 runs in 2 second(s) 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:30.123 19:02:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:30.123 [2024-07-15 19:02:10.440338] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:30.123 [2024-07-15 19:02:10.440440] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676850 ] 00:07:30.123 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.396 [2024-07-15 19:02:10.650596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.396 [2024-07-15 19:02:10.719680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.396 [2024-07-15 19:02:10.779232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.396 [2024-07-15 19:02:10.795534] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:30.396 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.396 INFO: Seed: 4079355857 00:07:30.654 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:30.654 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:30.654 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:30.654 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.654 #2 INITED exec/s: 0 rss: 65Mb 00:07:30.654 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:30.654 This may also happen if the target rejected all inputs we tried so far 00:07:30.654 [2024-07-15 19:02:10.860621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:30.654 [2024-07-15 19:02:10.860653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.912 NEW_FUNC[1/697]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:30.912 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:30.912 #28 NEW cov: 11970 ft: 11969 corp: 2/32b lim: 85 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:07:30.912 [2024-07-15 19:02:11.202320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:30.912 [2024-07-15 19:02:11.202409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.912 [2024-07-15 19:02:11.202522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:30.912 [2024-07-15 19:02:11.202564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.912 [2024-07-15 19:02:11.202671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:30.912 [2024-07-15 19:02:11.202711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.912 #30 NEW cov: 12100 ft: 13464 corp: 3/91b lim: 85 exec/s: 0 rss: 72Mb L: 59/59 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:30.912 [2024-07-15 19:02:11.261970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:30.912 [2024-07-15 19:02:11.262002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.912 [2024-07-15 19:02:11.262048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:30.912 [2024-07-15 19:02:11.262064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.912 [2024-07-15 19:02:11.262119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:30.912 [2024-07-15 19:02:11.262136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.912 #31 NEW cov: 12106 ft: 13697 corp: 4/150b lim: 85 exec/s: 0 rss: 72Mb L: 59/59 MS: 1 ChangeByte- 00:07:30.912 [2024-07-15 19:02:11.312231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:30.912 [2024-07-15 19:02:11.312261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.912 [2024-07-15 19:02:11.312319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:30.912 [2024-07-15 19:02:11.312335] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.912 [2024-07-15 19:02:11.312394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:30.912 [2024-07-15 19:02:11.312410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.912 [2024-07-15 19:02:11.312466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:30.912 [2024-07-15 19:02:11.312482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.170 #32 NEW cov: 12191 ft: 14423 corp: 5/232b lim: 85 exec/s: 0 rss: 72Mb L: 82/82 MS: 1 CopyPart- 00:07:31.170 [2024-07-15 19:02:11.372432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.170 [2024-07-15 19:02:11.372462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.372499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.170 [2024-07-15 19:02:11.372515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.372572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.170 [2024-07-15 19:02:11.372587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.372645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:31.170 [2024-07-15 19:02:11.372660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.170 #33 NEW cov: 12191 ft: 14545 corp: 6/315b lim: 85 exec/s: 0 rss: 72Mb L: 83/83 MS: 1 InsertByte- 00:07:31.170 [2024-07-15 19:02:11.422414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.170 [2024-07-15 19:02:11.422444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.422483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.170 [2024-07-15 19:02:11.422500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.422559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.170 [2024-07-15 19:02:11.422575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.170 #34 NEW cov: 12191 ft: 14621 corp: 7/374b lim: 85 exec/s: 0 rss: 72Mb L: 59/83 MS: 1 ShuffleBytes- 00:07:31.170 [2024-07-15 19:02:11.462526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.170 [2024-07-15 19:02:11.462554] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.462591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.170 [2024-07-15 19:02:11.462607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.462665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.170 [2024-07-15 19:02:11.462681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.170 #35 NEW cov: 12191 ft: 14736 corp: 8/433b lim: 85 exec/s: 0 rss: 72Mb L: 59/83 MS: 1 ChangeBit- 00:07:31.170 [2024-07-15 19:02:11.502600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.170 [2024-07-15 19:02:11.502631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.502682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.170 [2024-07-15 19:02:11.502698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.502756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.170 [2024-07-15 19:02:11.502773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.170 #36 NEW cov: 12191 ft: 14756 corp: 9/496b lim: 85 exec/s: 0 rss: 72Mb L: 63/83 MS: 1 CMP- DE: "\377\377\377\003"- 00:07:31.170 [2024-07-15 19:02:11.542842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.170 [2024-07-15 19:02:11.542868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.542933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.170 [2024-07-15 19:02:11.542950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.543007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.170 [2024-07-15 19:02:11.543024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.543080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:31.170 [2024-07-15 19:02:11.543097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.170 #37 NEW cov: 12191 ft: 14774 corp: 10/574b lim: 85 exec/s: 0 rss: 73Mb L: 78/83 MS: 1 InsertRepeatedBytes- 00:07:31.170 [2024-07-15 19:02:11.592871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.170 
[2024-07-15 19:02:11.592899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.592937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.170 [2024-07-15 19:02:11.592953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.170 [2024-07-15 19:02:11.593009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.170 [2024-07-15 19:02:11.593025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.427 #38 NEW cov: 12191 ft: 14848 corp: 11/628b lim: 85 exec/s: 0 rss: 73Mb L: 54/83 MS: 1 EraseBytes- 00:07:31.427 [2024-07-15 19:02:11.643007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.427 [2024-07-15 19:02:11.643035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.427 [2024-07-15 19:02:11.643095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.427 [2024-07-15 19:02:11.643111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.427 [2024-07-15 19:02:11.643168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.427 [2024-07-15 19:02:11.643185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.427 #39 NEW cov: 12191 ft: 14908 corp: 12/687b lim: 85 exec/s: 0 rss: 73Mb L: 59/83 MS: 1 ShuffleBytes- 00:07:31.427 [2024-07-15 19:02:11.682966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.427 [2024-07-15 19:02:11.682994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.427 [2024-07-15 19:02:11.683038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.427 [2024-07-15 19:02:11.683054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.427 #40 NEW cov: 12191 ft: 15249 corp: 13/729b lim: 85 exec/s: 0 rss: 73Mb L: 42/83 MS: 1 EraseBytes- 00:07:31.427 [2024-07-15 19:02:11.733443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.427 [2024-07-15 19:02:11.733469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.427 [2024-07-15 19:02:11.733536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.427 [2024-07-15 19:02:11.733552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.427 [2024-07-15 19:02:11.733609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.427 
[2024-07-15 19:02:11.733625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.427 [2024-07-15 19:02:11.733683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:31.427 [2024-07-15 19:02:11.733698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.427 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:31.427 #41 NEW cov: 12214 ft: 15284 corp: 14/812b lim: 85 exec/s: 0 rss: 73Mb L: 83/83 MS: 1 InsertByte- 00:07:31.427 [2024-07-15 19:02:11.773271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.427 [2024-07-15 19:02:11.773298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.427 [2024-07-15 19:02:11.773345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.427 [2024-07-15 19:02:11.773362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.427 #42 NEW cov: 12214 ft: 15429 corp: 15/854b lim: 85 exec/s: 0 rss: 73Mb L: 42/83 MS: 1 ChangeBinInt- 00:07:31.427 [2024-07-15 19:02:11.823722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.427 [2024-07-15 19:02:11.823749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.427 [2024-07-15 19:02:11.823817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.427 [2024-07-15 19:02:11.823834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.427 [2024-07-15 19:02:11.823892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.427 [2024-07-15 19:02:11.823908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.427 [2024-07-15 19:02:11.823966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:31.427 [2024-07-15 19:02:11.823982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.685 #43 NEW cov: 12214 ft: 15438 corp: 16/932b lim: 85 exec/s: 43 rss: 73Mb L: 78/83 MS: 1 ChangeBit- 00:07:31.685 [2024-07-15 19:02:11.873821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.685 [2024-07-15 19:02:11.873848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:11.873913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.685 [2024-07-15 19:02:11.873929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 
19:02:11.873985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.685 [2024-07-15 19:02:11.874002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:11.874058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:31.685 [2024-07-15 19:02:11.874075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.685 #44 NEW cov: 12214 ft: 15452 corp: 17/1004b lim: 85 exec/s: 44 rss: 73Mb L: 72/83 MS: 1 EraseBytes- 00:07:31.685 [2024-07-15 19:02:11.913971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.685 [2024-07-15 19:02:11.913998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:11.914049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.685 [2024-07-15 19:02:11.914066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:11.914121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.685 [2024-07-15 19:02:11.914136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:11.914193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:31.685 [2024-07-15 19:02:11.914209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.685 #45 NEW cov: 12214 ft: 15476 corp: 18/1087b lim: 85 exec/s: 45 rss: 73Mb L: 83/83 MS: 1 CrossOver- 00:07:31.685 [2024-07-15 19:02:11.964076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.685 [2024-07-15 19:02:11.964103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:11.964169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.685 [2024-07-15 19:02:11.964185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:11.964245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.685 [2024-07-15 19:02:11.964262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:11.964320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:31.685 [2024-07-15 19:02:11.964335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.685 #46 NEW cov: 12214 ft: 15485 corp: 19/1170b lim: 85 exec/s: 46 rss: 73Mb L: 83/83 MS: 1 ChangeByte- 00:07:31.685 
[2024-07-15 19:02:12.013894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.685 [2024-07-15 19:02:12.013924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:12.014001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.685 [2024-07-15 19:02:12.014018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.685 #47 NEW cov: 12214 ft: 15505 corp: 20/1212b lim: 85 exec/s: 47 rss: 73Mb L: 42/83 MS: 1 ChangeByte- 00:07:31.685 [2024-07-15 19:02:12.054144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.685 [2024-07-15 19:02:12.054171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:12.054209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.685 [2024-07-15 19:02:12.054228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:12.054288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.685 [2024-07-15 19:02:12.054304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.685 #48 NEW cov: 12214 ft: 15551 corp: 21/1266b lim: 85 exec/s: 48 rss: 73Mb L: 54/83 MS: 1 ChangeBinInt- 00:07:31.685 [2024-07-15 19:02:12.094426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.685 [2024-07-15 19:02:12.094453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:12.094501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.685 [2024-07-15 19:02:12.094517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:12.094575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.685 [2024-07-15 19:02:12.094588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.685 [2024-07-15 19:02:12.094645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:31.685 [2024-07-15 19:02:12.094661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.943 [2024-07-15 19:02:12.144576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.943 [2024-07-15 19:02:12.144603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.943 [2024-07-15 19:02:12.144669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER 
(0d) sqid:1 cid:1 nsid:0 00:07:31.943 [2024-07-15 19:02:12.144685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.943 [2024-07-15 19:02:12.144742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.943 [2024-07-15 19:02:12.144758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.943 [2024-07-15 19:02:12.144816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:31.943 [2024-07-15 19:02:12.144830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.943 #50 NEW cov: 12214 ft: 15562 corp: 22/1349b lim: 85 exec/s: 50 rss: 74Mb L: 83/83 MS: 2 ChangeByte-CopyPart- 00:07:31.943 [2024-07-15 19:02:12.184535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.944 [2024-07-15 19:02:12.184566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.944 [2024-07-15 19:02:12.184613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.944 [2024-07-15 19:02:12.184629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.944 [2024-07-15 19:02:12.184687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.944 [2024-07-15 19:02:12.184703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.944 #51 NEW cov: 12214 ft: 15649 corp: 23/1408b lim: 85 exec/s: 51 rss: 74Mb L: 59/83 MS: 1 CrossOver- 00:07:31.944 [2024-07-15 19:02:12.224606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.944 [2024-07-15 19:02:12.224632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.944 [2024-07-15 19:02:12.224673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.944 [2024-07-15 19:02:12.224689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.944 [2024-07-15 19:02:12.224747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.944 [2024-07-15 19:02:12.224763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.944 #52 NEW cov: 12214 ft: 15679 corp: 24/1462b lim: 85 exec/s: 52 rss: 74Mb L: 54/83 MS: 1 PersAutoDict- DE: "\377\377\377\003"- 00:07:31.944 [2024-07-15 19:02:12.274630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.944 [2024-07-15 19:02:12.274658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.944 [2024-07-15 19:02:12.274725] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.944 [2024-07-15 19:02:12.274742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.944 #53 NEW cov: 12214 ft: 15681 corp: 25/1504b lim: 85 exec/s: 53 rss: 74Mb L: 42/83 MS: 1 ChangeBit- 00:07:31.944 [2024-07-15 19:02:12.325056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:31.944 [2024-07-15 19:02:12.325083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.944 [2024-07-15 19:02:12.325148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:31.944 [2024-07-15 19:02:12.325164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.944 [2024-07-15 19:02:12.325231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:31.944 [2024-07-15 19:02:12.325247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.944 [2024-07-15 19:02:12.325315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:31.944 [2024-07-15 19:02:12.325329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.944 #54 NEW cov: 12214 ft: 15688 corp: 26/1587b lim: 85 exec/s: 54 rss: 74Mb L: 83/83 MS: 1 ChangeBit- 00:07:32.202 [2024-07-15 19:02:12.375200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:32.202 [2024-07-15 19:02:12.375243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.202 [2024-07-15 19:02:12.375301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:32.202 [2024-07-15 19:02:12.375318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.202 [2024-07-15 19:02:12.375383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:32.202 [2024-07-15 19:02:12.375400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.202 [2024-07-15 19:02:12.375456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:32.202 [2024-07-15 19:02:12.375479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.202 #55 NEW cov: 12214 ft: 15728 corp: 27/1670b lim: 85 exec/s: 55 rss: 74Mb L: 83/83 MS: 1 ShuffleBytes- 00:07:32.202 [2024-07-15 19:02:12.425530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:32.202 [2024-07-15 19:02:12.425561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.425622] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:32.203 [2024-07-15 19:02:12.425639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.425699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:32.203 [2024-07-15 19:02:12.425715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.425771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:32.203 [2024-07-15 19:02:12.425786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.425847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:32.203 [2024-07-15 19:02:12.425862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:32.203 #56 NEW cov: 12214 ft: 15802 corp: 28/1755b lim: 85 exec/s: 56 rss: 74Mb L: 85/85 MS: 1 CrossOver- 00:07:32.203 [2024-07-15 19:02:12.465304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:32.203 [2024-07-15 19:02:12.465334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.465389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:32.203 [2024-07-15 19:02:12.465405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.465464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:32.203 [2024-07-15 19:02:12.465480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.203 #57 NEW cov: 12214 ft: 15820 corp: 29/1817b lim: 85 exec/s: 57 rss: 74Mb L: 62/85 MS: 1 CMP- DE: "eQ@|p8\023\000"- 00:07:32.203 [2024-07-15 19:02:12.515656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:32.203 [2024-07-15 19:02:12.515685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.515746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:32.203 [2024-07-15 19:02:12.515765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.515823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:32.203 [2024-07-15 19:02:12.515838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.515897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 
cid:3 nsid:0 00:07:32.203 [2024-07-15 19:02:12.515913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.203 #58 NEW cov: 12214 ft: 15837 corp: 30/1897b lim: 85 exec/s: 58 rss: 74Mb L: 80/85 MS: 1 CopyPart- 00:07:32.203 [2024-07-15 19:02:12.565767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:32.203 [2024-07-15 19:02:12.565798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.565852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:32.203 [2024-07-15 19:02:12.565869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.565927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:32.203 [2024-07-15 19:02:12.565944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.566004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:32.203 [2024-07-15 19:02:12.566021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.203 #59 NEW cov: 12214 ft: 15849 corp: 31/1973b lim: 85 exec/s: 59 rss: 74Mb L: 76/85 MS: 1 EraseBytes- 00:07:32.203 [2024-07-15 19:02:12.605713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:32.203 [2024-07-15 19:02:12.605744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.605782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:32.203 [2024-07-15 19:02:12.605798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.203 [2024-07-15 19:02:12.605856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:32.203 [2024-07-15 19:02:12.605872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.462 #60 NEW cov: 12214 ft: 15856 corp: 32/2027b lim: 85 exec/s: 60 rss: 74Mb L: 54/85 MS: 1 EraseBytes- 00:07:32.462 [2024-07-15 19:02:12.655858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:32.462 [2024-07-15 19:02:12.655889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.655946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:32.462 [2024-07-15 19:02:12.655961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.656020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:2 nsid:0 00:07:32.462 [2024-07-15 19:02:12.656038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.462 #61 NEW cov: 12214 ft: 15865 corp: 33/2090b lim: 85 exec/s: 61 rss: 74Mb L: 63/85 MS: 1 ShuffleBytes- 00:07:32.462 [2024-07-15 19:02:12.706337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:32.462 [2024-07-15 19:02:12.706365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.706421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:32.462 [2024-07-15 19:02:12.706436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.706494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:32.462 [2024-07-15 19:02:12.706510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.706565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:32.462 [2024-07-15 19:02:12.706581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.706636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:32.462 [2024-07-15 19:02:12.706652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:32.462 #62 NEW cov: 12214 ft: 15867 corp: 34/2175b lim: 85 exec/s: 62 rss: 74Mb L: 85/85 MS: 1 CrossOver- 00:07:32.462 [2024-07-15 19:02:12.746251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:32.462 [2024-07-15 19:02:12.746279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.746330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:32.462 [2024-07-15 19:02:12.746346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.746400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:32.462 [2024-07-15 19:02:12.746414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.746471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:32.462 [2024-07-15 19:02:12.746486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.462 #63 NEW cov: 12214 ft: 15884 corp: 35/2253b lim: 85 exec/s: 63 rss: 74Mb L: 78/85 MS: 1 PersAutoDict- DE: "eQ@|p8\023\000"- 00:07:32.462 [2024-07-15 19:02:12.786019] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:32.462 [2024-07-15 19:02:12.786047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.786098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:32.462 [2024-07-15 19:02:12.786114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.462 #64 NEW cov: 12214 ft: 15899 corp: 36/2299b lim: 85 exec/s: 64 rss: 74Mb L: 46/85 MS: 1 EraseBytes- 00:07:32.462 [2024-07-15 19:02:12.826466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:32.462 [2024-07-15 19:02:12.826494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.826539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:32.462 [2024-07-15 19:02:12.826559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.826628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:32.462 [2024-07-15 19:02:12.826644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.462 [2024-07-15 19:02:12.826699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:32.462 [2024-07-15 19:02:12.826715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.462 #65 NEW cov: 12214 ft: 15927 corp: 37/2367b lim: 85 exec/s: 32 rss: 74Mb L: 68/85 MS: 1 CrossOver- 00:07:32.462 #65 DONE cov: 12214 ft: 15927 corp: 37/2367b lim: 85 exec/s: 32 rss: 74Mb 00:07:32.462 ###### Recommended dictionary. ###### 00:07:32.462 "\377\377\377\003" # Uses: 1 00:07:32.462 "eQ@|p8\023\000" # Uses: 1 00:07:32.462 ###### End of recommended dictionary. 
###### 00:07:32.462 Done 65 runs in 2 second(s) 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:32.720 19:02:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:32.720 [2024-07-15 19:02:13.025463] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
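The xtrace above documents how test/fuzz/llvm/nvmf/run.sh assembles the standalone launch for fuzzer 23: it derives listen port 4423 from the fuzzer number, rewrites the default trsvcid 4420 in fuzz_json.conf with sed, registers two LSAN leak suppressions, and starts llvm_nvme_fuzz against a TCP transport ID. The following is a minimal re-run sketch, not a verbatim copy of run.sh: it assumes the same checkout layout as this workspace, the flag meanings (-t run-time budget from timen, -D corpus directory, -Z fuzzer type, -m core mask, -s hugepage memory in MB) are inferred from the traced variable names, and the redirection targets for the sed and echo steps are assumptions, since redirections are not visible in an xtrace.

#!/usr/bin/env bash
# Hedged reproduction sketch for the fuzzer-23 launch traced above.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk  # assumed checkout path
FUZZER=23                        # fuzzer_type, from "start_llvm_fuzz 23 1 0x1"
PORT=$((4400 + FUZZER))          # reproduces the traced "port=4423"
CFG=/tmp/fuzz_json_${FUZZER}.conf
CORPUS="${SPDK_DIR}/../corpus/llvm_nvmf_${FUZZER}"
SUPP=/var/tmp/suppress_nvmf_fuzz

# Rewrite the default trsvcid for this run (output redirection assumed).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${PORT}\"/" \
    "${SPDK_DIR}/test/fuzz/llvm/nvmf/fuzz_json.conf" > "${CFG}"

# The two leak suppressions echoed in the trace (file destination assumed).
printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "${SUPP}"
export LSAN_OPTIONS="report_objects=1:suppressions=${SUPP}:print_suppressions=0"

mkdir -p "${CORPUS}"
"${SPDK_DIR}/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P "${SPDK_DIR}/../output/llvm/" \
    -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${PORT}" \
    -c "${CFG}" -t 1 -D "${CORPUS}" -Z "${FUZZER}"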
00:07:32.720 [2024-07-15 19:02:13.025532] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677226 ] 00:07:32.720 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.978 [2024-07-15 19:02:13.232960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.978 [2024-07-15 19:02:13.302490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.978 [2024-07-15 19:02:13.361710] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.978 [2024-07-15 19:02:13.378008] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:32.978 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.978 INFO: Seed: 2366387806 00:07:33.236 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:33.236 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:33.236 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:33.236 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.236 #2 INITED exec/s: 0 rss: 65Mb 00:07:33.236 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:33.236 This may also happen if the target rejected all inputs we tried so far 00:07:33.236 [2024-07-15 19:02:13.443193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:33.236 [2024-07-15 19:02:13.443229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.494 NEW_FUNC[1/696]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:33.494 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:33.494 #6 NEW cov: 11903 ft: 11904 corp: 2/7b lim: 25 exec/s: 0 rss: 72Mb L: 6/6 MS: 4 CopyPart-ChangeBit-EraseBytes-InsertRepeatedBytes- 00:07:33.494 [2024-07-15 19:02:13.784013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:33.494 [2024-07-15 19:02:13.784063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.494 #7 NEW cov: 12033 ft: 12573 corp: 3/13b lim: 25 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ChangeByte- 00:07:33.494 [2024-07-15 19:02:13.843995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:33.494 [2024-07-15 19:02:13.844025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.494 #8 NEW cov: 12039 ft: 12866 corp: 4/19b lim: 25 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 CopyPart- 00:07:33.494 [2024-07-15 19:02:13.894127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:33.494 [2024-07-15 19:02:13.894156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.494 #9 NEW cov: 12124 ft: 13146 corp: 5/25b lim: 
25 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 CopyPart- 00:07:33.771 [2024-07-15 19:02:13.934234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:33.771 [2024-07-15 19:02:13.934262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.771 #15 NEW cov: 12124 ft: 13237 corp: 6/32b lim: 25 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 InsertByte- 00:07:33.771 [2024-07-15 19:02:13.974511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:33.771 [2024-07-15 19:02:13.974539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.771 [2024-07-15 19:02:13.974578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:33.771 [2024-07-15 19:02:13.974594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.771 [2024-07-15 19:02:13.974648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:33.771 [2024-07-15 19:02:13.974664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.771 #16 NEW cov: 12124 ft: 13767 corp: 7/47b lim: 25 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 InsertRepeatedBytes- 00:07:33.771 [2024-07-15 19:02:14.024459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:33.771 [2024-07-15 19:02:14.024489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.772 #17 NEW cov: 12124 ft: 13812 corp: 8/54b lim: 25 exec/s: 0 rss: 73Mb L: 7/15 MS: 1 InsertByte- 00:07:33.772 [2024-07-15 19:02:14.064564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:33.772 [2024-07-15 19:02:14.064593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.772 #18 NEW cov: 12124 ft: 13900 corp: 9/61b lim: 25 exec/s: 0 rss: 73Mb L: 7/15 MS: 1 CMP- DE: "\000\000"- 00:07:33.772 [2024-07-15 19:02:14.104686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:33.772 [2024-07-15 19:02:14.104714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.772 #19 NEW cov: 12124 ft: 13927 corp: 10/68b lim: 25 exec/s: 0 rss: 73Mb L: 7/15 MS: 1 CopyPart- 00:07:33.772 [2024-07-15 19:02:14.144808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:33.772 [2024-07-15 19:02:14.144835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.772 #20 NEW cov: 12124 ft: 14040 corp: 11/75b lim: 25 exec/s: 0 rss: 73Mb L: 7/15 MS: 1 InsertByte- 00:07:33.772 [2024-07-15 19:02:14.194983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:33.772 [2024-07-15 19:02:14.195010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.029 #21 NEW cov: 12124 ft: 14069 corp: 12/82b lim: 25 exec/s: 0 rss: 73Mb L: 7/15 MS: 1 ChangeBinInt- 00:07:34.029 [2024-07-15 19:02:14.245071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.029 [2024-07-15 19:02:14.245099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.029 #22 NEW cov: 12124 ft: 14090 corp: 13/89b lim: 25 exec/s: 0 rss: 73Mb L: 7/15 MS: 1 InsertByte- 00:07:34.029 [2024-07-15 19:02:14.285183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.029 [2024-07-15 19:02:14.285210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.029 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:34.029 #23 NEW cov: 12147 ft: 14137 corp: 14/96b lim: 25 exec/s: 0 rss: 73Mb L: 7/15 MS: 1 ChangeBit- 00:07:34.029 [2024-07-15 19:02:14.335431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.029 [2024-07-15 19:02:14.335457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.029 [2024-07-15 19:02:14.335496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:34.029 [2024-07-15 19:02:14.335511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.029 #24 NEW cov: 12147 ft: 14352 corp: 15/108b lim: 25 exec/s: 0 rss: 73Mb L: 12/15 MS: 1 InsertRepeatedBytes- 00:07:34.029 [2024-07-15 19:02:14.375452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.029 [2024-07-15 19:02:14.375479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.029 #25 NEW cov: 12147 ft: 14375 corp: 16/117b lim: 25 exec/s: 0 rss: 73Mb L: 9/15 MS: 1 CopyPart- 00:07:34.029 [2024-07-15 19:02:14.425603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.029 [2024-07-15 19:02:14.425633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.287 #26 NEW cov: 12147 ft: 14411 corp: 17/126b lim: 25 exec/s: 26 rss: 73Mb L: 9/15 MS: 1 CopyPart- 00:07:34.287 [2024-07-15 19:02:14.475702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.287 [2024-07-15 19:02:14.475729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.287 #27 NEW cov: 12147 ft: 14430 corp: 18/133b lim: 25 exec/s: 27 rss: 73Mb L: 7/15 MS: 1 ChangeBinInt- 00:07:34.287 [2024-07-15 19:02:14.515935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.287 [2024-07-15 19:02:14.515962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.287 [2024-07-15 19:02:14.516013] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:34.287 [2024-07-15 19:02:14.516028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.287 #28 NEW cov: 12147 ft: 14467 corp: 19/144b lim: 25 exec/s: 28 rss: 73Mb L: 11/15 MS: 1 CopyPart- 00:07:34.287 [2024-07-15 19:02:14.555917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.287 [2024-07-15 19:02:14.555945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.287 #29 NEW cov: 12147 ft: 14498 corp: 20/150b lim: 25 exec/s: 29 rss: 73Mb L: 6/15 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:34.287 [2024-07-15 19:02:14.596047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.287 [2024-07-15 19:02:14.596074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.288 #30 NEW cov: 12147 ft: 14507 corp: 21/157b lim: 25 exec/s: 30 rss: 73Mb L: 7/15 MS: 1 ShuffleBytes- 00:07:34.288 [2024-07-15 19:02:14.636163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.288 [2024-07-15 19:02:14.636189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.288 #31 NEW cov: 12147 ft: 14542 corp: 22/166b lim: 25 exec/s: 31 rss: 73Mb L: 9/15 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:34.288 [2024-07-15 19:02:14.676515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.288 [2024-07-15 19:02:14.676540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.288 [2024-07-15 19:02:14.676602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:34.288 [2024-07-15 19:02:14.676618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.288 [2024-07-15 19:02:14.676674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:34.288 [2024-07-15 19:02:14.676688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.288 #32 NEW cov: 12147 ft: 14562 corp: 23/183b lim: 25 exec/s: 32 rss: 73Mb L: 17/17 MS: 1 CopyPart- 00:07:34.546 [2024-07-15 19:02:14.726415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.546 [2024-07-15 19:02:14.726441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.546 #33 NEW cov: 12147 ft: 14569 corp: 24/191b lim: 25 exec/s: 33 rss: 73Mb L: 8/17 MS: 1 InsertByte- 00:07:34.546 [2024-07-15 19:02:14.776529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.546 [2024-07-15 19:02:14.776555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.546 
#35 NEW cov: 12147 ft: 14580 corp: 25/198b lim: 25 exec/s: 35 rss: 74Mb L: 7/17 MS: 2 EraseBytes-CopyPart- 00:07:34.546 [2024-07-15 19:02:14.826916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.546 [2024-07-15 19:02:14.826943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.546 [2024-07-15 19:02:14.826998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:34.546 [2024-07-15 19:02:14.827012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.546 [2024-07-15 19:02:14.827067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:34.546 [2024-07-15 19:02:14.827083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.546 #36 NEW cov: 12147 ft: 14678 corp: 26/213b lim: 25 exec/s: 36 rss: 74Mb L: 15/17 MS: 1 CrossOver- 00:07:34.546 [2024-07-15 19:02:14.876907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.546 [2024-07-15 19:02:14.876935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.546 [2024-07-15 19:02:14.876998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:34.546 [2024-07-15 19:02:14.877014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.546 #37 NEW cov: 12147 ft: 14709 corp: 27/224b lim: 25 exec/s: 37 rss: 74Mb L: 11/17 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:34.546 [2024-07-15 19:02:14.927169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.546 [2024-07-15 19:02:14.927196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.546 [2024-07-15 19:02:14.927258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:34.546 [2024-07-15 19:02:14.927275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.546 [2024-07-15 19:02:14.927328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:34.546 [2024-07-15 19:02:14.927344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.547 #38 NEW cov: 12147 ft: 14725 corp: 28/243b lim: 25 exec/s: 38 rss: 74Mb L: 19/19 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:34.806 [2024-07-15 19:02:14.977077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.806 [2024-07-15 19:02:14.977105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.806 #39 NEW cov: 12147 ft: 14727 corp: 29/251b lim: 25 exec/s: 39 rss: 74Mb L: 8/19 MS: 1 InsertRepeatedBytes- 00:07:34.806 [2024-07-15 19:02:15.017144] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.806 [2024-07-15 19:02:15.017171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.806 #40 NEW cov: 12147 ft: 14817 corp: 30/260b lim: 25 exec/s: 40 rss: 74Mb L: 9/19 MS: 1 CopyPart- 00:07:34.806 [2024-07-15 19:02:15.057560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.806 [2024-07-15 19:02:15.057586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.806 [2024-07-15 19:02:15.057650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:34.806 [2024-07-15 19:02:15.057669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.806 [2024-07-15 19:02:15.057723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:34.806 [2024-07-15 19:02:15.057739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.806 #41 NEW cov: 12147 ft: 14836 corp: 31/276b lim: 25 exec/s: 41 rss: 74Mb L: 16/19 MS: 1 CMP- DE: "W\003\000\000\000\000\000\000"- 00:07:34.806 [2024-07-15 19:02:15.107623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.806 [2024-07-15 19:02:15.107650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.806 [2024-07-15 19:02:15.107695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:34.806 [2024-07-15 19:02:15.107710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.806 [2024-07-15 19:02:15.107764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:34.806 [2024-07-15 19:02:15.107779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.806 #42 NEW cov: 12147 ft: 14848 corp: 32/295b lim: 25 exec/s: 42 rss: 74Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:07:34.806 [2024-07-15 19:02:15.147770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.806 [2024-07-15 19:02:15.147800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.806 [2024-07-15 19:02:15.147840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:34.806 [2024-07-15 19:02:15.147855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.806 [2024-07-15 19:02:15.147912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:34.806 [2024-07-15 19:02:15.147928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.806 #43 NEW cov: 12147 ft: 
14849 corp: 33/312b lim: 25 exec/s: 43 rss: 74Mb L: 17/19 MS: 1 ChangeBinInt- 00:07:34.806 [2024-07-15 19:02:15.187626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.806 [2024-07-15 19:02:15.187654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.806 #44 NEW cov: 12147 ft: 14854 corp: 34/320b lim: 25 exec/s: 44 rss: 74Mb L: 8/19 MS: 1 InsertByte- 00:07:34.806 [2024-07-15 19:02:15.228235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:34.806 [2024-07-15 19:02:15.228262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.806 [2024-07-15 19:02:15.228333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:34.806 [2024-07-15 19:02:15.228348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.806 [2024-07-15 19:02:15.228403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:34.806 [2024-07-15 19:02:15.228419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.806 [2024-07-15 19:02:15.228472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:34.806 [2024-07-15 19:02:15.228487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.806 [2024-07-15 19:02:15.228542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:34.806 [2024-07-15 19:02:15.228557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:35.065 #45 NEW cov: 12147 ft: 15322 corp: 35/345b lim: 25 exec/s: 45 rss: 74Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:35.065 [2024-07-15 19:02:15.267873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:35.065 [2024-07-15 19:02:15.267900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.065 #46 NEW cov: 12147 ft: 15328 corp: 36/354b lim: 25 exec/s: 46 rss: 74Mb L: 9/25 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:35.065 [2024-07-15 19:02:15.308479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:35.065 [2024-07-15 19:02:15.308507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.065 [2024-07-15 19:02:15.308572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:35.065 [2024-07-15 19:02:15.308587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.065 [2024-07-15 19:02:15.308641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:35.065 [2024-07-15 19:02:15.308657] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.065 [2024-07-15 19:02:15.308710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:35.065 [2024-07-15 19:02:15.308726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.065 [2024-07-15 19:02:15.308779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:35.065 [2024-07-15 19:02:15.308793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:35.065 #47 NEW cov: 12147 ft: 15329 corp: 37/379b lim: 25 exec/s: 47 rss: 74Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:35.065 [2024-07-15 19:02:15.358113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:35.065 [2024-07-15 19:02:15.358141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.065 #48 NEW cov: 12147 ft: 15339 corp: 38/386b lim: 25 exec/s: 48 rss: 74Mb L: 7/25 MS: 1 InsertByte- 00:07:35.065 [2024-07-15 19:02:15.398190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:35.066 [2024-07-15 19:02:15.398223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.066 #49 NEW cov: 12147 ft: 15340 corp: 39/395b lim: 25 exec/s: 24 rss: 74Mb L: 9/25 MS: 1 InsertByte- 00:07:35.066 #49 DONE cov: 12147 ft: 15340 corp: 39/395b lim: 25 exec/s: 24 rss: 74Mb 00:07:35.066 ###### Recommended dictionary. ###### 00:07:35.066 "\000\000" # Uses: 5 00:07:35.066 "W\003\000\000\000\000\000\000" # Uses: 0 00:07:35.066 ###### End of recommended dictionary. 
###### 00:07:35.066 Done 49 runs in 2 second(s) 00:07:35.324 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:35.324 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:35.325 19:02:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:35.325 [2024-07-15 19:02:15.614662] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
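Runs 23 and 24 print the same libFuzzer-style status stream between the NVMe command/completion notices. Decoded once here for reference, using a line that appears in run 24 below; the field meanings follow standard libFuzzer output conventions and are not defined anywhere in this log itself:

  #6 NEW cov: 12105 ft: 13387 corp: 3/76b lim: 100 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 CopyPart-

  #6             total inputs executed when the event was logged
  NEW            this input added coverage and was kept in the corpus
                 (INITED and DONE mark the start and end of a run)
  cov: 12105     covered code blocks/edges
  ft: 13387      coverage "features", a finer-grained signal than cov
  corp: 3/76b    corpus entries / total corpus size in bytes
  lim: 100       current input-length limit
  exec/s:, rss:  throughput and resident memory
  L: 44/44       this input's length / the largest corpus unit
  MS: 1 CopyPart-  the mutation sequence that produced the input

Mutations such as PersAutoDict- and CMP- additionally carry a DE: ("dictionary entry") showing the byte string involved, and the "Recommended dictionary" block printed at the end of each run collects those strings with their use counts so they can be fed back to later runs as a fuzzing dictionary.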
00:07:35.325 [2024-07-15 19:02:15.614732] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677591 ] 00:07:35.325 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.583 [2024-07-15 19:02:15.823469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.583 [2024-07-15 19:02:15.895117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.583 [2024-07-15 19:02:15.954743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.583 [2024-07-15 19:02:15.971047] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:35.583 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.583 INFO: Seed: 662413822 00:07:35.842 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:35.842 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:35.842 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:35.842 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.842 #2 INITED exec/s: 0 rss: 64Mb 00:07:35.842 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:35.842 This may also happen if the target rejected all inputs we tried so far 00:07:35.842 [2024-07-15 19:02:16.029590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.842 [2024-07-15 19:02:16.029621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.100 NEW_FUNC[1/696]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:36.100 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:36.100 #5 NEW cov: 11974 ft: 11976 corp: 2/32b lim: 100 exec/s: 0 rss: 72Mb L: 31/31 MS: 3 ChangeBit-InsertRepeatedBytes-InsertRepeatedBytes- 00:07:36.100 [2024-07-15 19:02:16.380624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.100 [2024-07-15 19:02:16.380682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.100 [2024-07-15 19:02:16.380755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2821266740684990247 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.100 [2024-07-15 19:02:16.380779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.100 NEW_FUNC[1/1]: 0xf46e20 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:94 00:07:36.100 #6 NEW cov: 12105 ft: 13387 corp: 3/76b lim: 100 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 CopyPart- 00:07:36.100 [2024-07-15 19:02:16.440742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:36.100 [2024-07-15 19:02:16.440773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.100 [2024-07-15 19:02:16.440810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:168160601858 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.100 [2024-07-15 19:02:16.440825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.100 [2024-07-15 19:02:16.440879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2821223692227782439 len:40 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.100 [2024-07-15 19:02:16.440894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.100 #7 NEW cov: 12111 ft: 14015 corp: 4/151b lim: 100 exec/s: 0 rss: 72Mb L: 75/75 MS: 1 CrossOver- 00:07:36.100 [2024-07-15 19:02:16.490577] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.100 [2024-07-15 19:02:16.490606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.100 #13 NEW cov: 12196 ft: 14315 corp: 5/183b lim: 100 exec/s: 0 rss: 72Mb L: 32/75 MS: 1 InsertByte- 00:07:36.359 [2024-07-15 19:02:16.530691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.359 [2024-07-15 19:02:16.530720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.359 #14 NEW cov: 12196 ft: 14451 corp: 6/214b lim: 100 exec/s: 0 rss: 72Mb L: 31/75 MS: 1 CopyPart- 00:07:36.359 [2024-07-15 19:02:16.570795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.359 [2024-07-15 19:02:16.570825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.359 #15 NEW cov: 12196 ft: 14534 corp: 7/245b lim: 100 exec/s: 0 rss: 72Mb L: 31/75 MS: 1 ShuffleBytes- 00:07:36.359 [2024-07-15 19:02:16.611037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.359 [2024-07-15 19:02:16.611064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.359 [2024-07-15 19:02:16.611134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:654321408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.359 [2024-07-15 19:02:16.611151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.359 #16 NEW cov: 12196 ft: 14586 corp: 8/288b lim: 100 exec/s: 0 rss: 72Mb L: 43/75 MS: 1 CopyPart- 00:07:36.359 [2024-07-15 19:02:16.661037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:36.359 [2024-07-15 19:02:16.661065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.359 #17 NEW cov: 12196 ft: 14634 corp: 9/319b lim: 100 exec/s: 0 rss: 73Mb L: 31/75 MS: 1 ChangeBit- 00:07:36.359 [2024-07-15 19:02:16.711165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.359 [2024-07-15 19:02:16.711193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.359 #18 NEW cov: 12196 ft: 14659 corp: 10/351b lim: 100 exec/s: 0 rss: 73Mb L: 32/75 MS: 1 InsertByte- 00:07:36.359 [2024-07-15 19:02:16.751260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.359 [2024-07-15 19:02:16.751290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.359 #19 NEW cov: 12196 ft: 14786 corp: 11/382b lim: 100 exec/s: 0 rss: 73Mb L: 31/75 MS: 1 ChangeBinInt- 00:07:36.618 [2024-07-15 19:02:16.791398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.618 [2024-07-15 19:02:16.791426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.618 #20 NEW cov: 12196 ft: 14820 corp: 12/413b lim: 100 exec/s: 0 rss: 73Mb L: 31/75 MS: 1 ShuffleBytes- 00:07:36.618 [2024-07-15 19:02:16.841546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.618 [2024-07-15 19:02:16.841574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.618 #21 NEW cov: 12196 ft: 14861 corp: 13/445b lim: 100 exec/s: 0 rss: 73Mb L: 32/75 MS: 1 ShuffleBytes- 00:07:36.618 [2024-07-15 19:02:16.891694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2810289048466237223 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.618 [2024-07-15 19:02:16.891722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.618 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:36.618 #22 NEW cov: 12219 ft: 14905 corp: 14/477b lim: 100 exec/s: 0 rss: 73Mb L: 32/75 MS: 1 ShuffleBytes- 00:07:36.618 [2024-07-15 19:02:16.942206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.618 [2024-07-15 19:02:16.942239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.618 [2024-07-15 19:02:16.942302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.618 [2024-07-15 19:02:16.942320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.618 [2024-07-15 19:02:16.942371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.618 [2024-07-15 19:02:16.942387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.618 [2024-07-15 19:02:16.942440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.618 [2024-07-15 19:02:16.942458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.618 #23 NEW cov: 12219 ft: 15290 corp: 15/570b lim: 100 exec/s: 0 rss: 73Mb L: 93/93 MS: 1 InsertRepeatedBytes- 00:07:36.618 [2024-07-15 19:02:16.981889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:11020572716826624 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.618 [2024-07-15 19:02:16.981917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.618 #24 NEW cov: 12219 ft: 15314 corp: 16/607b lim: 100 exec/s: 0 rss: 73Mb L: 37/93 MS: 1 CrossOver- 00:07:36.618 [2024-07-15 19:02:17.022070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.618 [2024-07-15 19:02:17.022097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.877 #25 NEW cov: 12219 ft: 15335 corp: 17/639b lim: 100 exec/s: 25 rss: 73Mb L: 32/93 MS: 1 InsertByte- 00:07:36.877 [2024-07-15 19:02:17.072177] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2810289048466237223 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.877 [2024-07-15 19:02:17.072205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.877 #26 NEW cov: 12219 ft: 15380 corp: 18/672b lim: 100 exec/s: 26 rss: 73Mb L: 33/93 MS: 1 InsertByte- 00:07:36.877 [2024-07-15 19:02:17.122476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.877 [2024-07-15 19:02:17.122502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.877 [2024-07-15 19:02:17.122559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:654321408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.877 [2024-07-15 19:02:17.122574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.877 #27 NEW cov: 12219 ft: 15419 corp: 19/715b lim: 100 exec/s: 27 rss: 73Mb L: 43/93 MS: 1 ChangeBit- 00:07:36.877 [2024-07-15 19:02:17.172431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.877 [2024-07-15 19:02:17.172458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.877 #28 NEW cov: 12219 ft: 15456 corp: 20/745b lim: 100 exec/s: 28 rss: 73Mb L: 30/93 MS: 1 CrossOver- 00:07:36.877 [2024-07-15 19:02:17.212542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821257943968645120 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.877 [2024-07-15 19:02:17.212570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.877 #29 NEW cov: 12219 ft: 15474 corp: 21/776b lim: 100 exec/s: 29 rss: 73Mb L: 31/93 MS: 1 ChangeBinInt- 00:07:36.877 [2024-07-15 19:02:17.252677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.877 [2024-07-15 19:02:17.252704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.877 #30 NEW cov: 12219 ft: 15496 corp: 22/806b lim: 100 exec/s: 30 rss: 73Mb L: 30/93 MS: 1 ChangeByte- 00:07:36.877 [2024-07-15 19:02:17.302857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.877 [2024-07-15 19:02:17.302885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.135 #31 NEW cov: 12219 ft: 15514 corp: 23/834b lim: 100 exec/s: 31 rss: 73Mb L: 28/93 MS: 1 EraseBytes- 00:07:37.135 [2024-07-15 19:02:17.343100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.135 [2024-07-15 19:02:17.343127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.135 [2024-07-15 19:02:17.343163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2821266740684990247 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.135 [2024-07-15 19:02:17.343178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.135 #32 NEW cov: 12219 ft: 15527 corp: 24/878b lim: 100 exec/s: 32 rss: 73Mb L: 44/93 MS: 1 CrossOver- 00:07:37.135 [2024-07-15 19:02:17.383231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.135 [2024-07-15 19:02:17.383257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.135 [2024-07-15 19:02:17.383312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:36028797675841280 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.135 [2024-07-15 19:02:17.383327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.135 #33 NEW cov: 12219 ft: 15608 corp: 25/934b lim: 100 exec/s: 33 rss: 74Mb L: 56/93 MS: 1 CopyPart- 00:07:37.135 [2024-07-15 19:02:17.433208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:37.135 [2024-07-15 19:02:17.433240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.135 #34 NEW cov: 12219 ft: 15631 corp: 26/965b lim: 100 exec/s: 34 rss: 74Mb L: 31/93 MS: 1 CopyPart- 00:07:37.135 [2024-07-15 19:02:17.473284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.135 [2024-07-15 19:02:17.473311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.135 #35 NEW cov: 12219 ft: 15642 corp: 27/999b lim: 100 exec/s: 35 rss: 74Mb L: 34/93 MS: 1 CMP- DE: "M\001\000\000"- 00:07:37.135 [2024-07-15 19:02:17.513534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.135 [2024-07-15 19:02:17.513562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.135 [2024-07-15 19:02:17.513614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:36028797675841280 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.135 [2024-07-15 19:02:17.513630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.135 #36 NEW cov: 12219 ft: 15651 corp: 28/1056b lim: 100 exec/s: 36 rss: 74Mb L: 57/93 MS: 1 InsertByte- 00:07:37.135 [2024-07-15 19:02:17.563522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.135 [2024-07-15 19:02:17.563549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.393 #37 NEW cov: 12219 ft: 15725 corp: 29/1086b lim: 100 exec/s: 37 rss: 74Mb L: 30/93 MS: 1 CopyPart- 00:07:37.393 [2024-07-15 19:02:17.603670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821257943968645120 len:9000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.393 [2024-07-15 19:02:17.603696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.393 #38 NEW cov: 12219 ft: 15751 corp: 30/1117b lim: 100 exec/s: 38 rss: 74Mb L: 31/93 MS: 1 ChangeBit- 00:07:37.393 [2024-07-15 19:02:17.653978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.393 [2024-07-15 19:02:17.654005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.393 [2024-07-15 19:02:17.654074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:36028797675841280 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.393 [2024-07-15 19:02:17.654090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.393 #39 NEW cov: 12219 ft: 15775 corp: 31/1174b lim: 100 exec/s: 39 rss: 74Mb L: 57/93 MS: 1 ChangeBit- 00:07:37.393 [2024-07-15 19:02:17.704231] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070656098303 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.393 [2024-07-15 19:02:17.704256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.393 [2024-07-15 19:02:17.704309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.393 [2024-07-15 19:02:17.704325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.393 [2024-07-15 19:02:17.704379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.393 [2024-07-15 19:02:17.704395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.393 #43 NEW cov: 12219 ft: 15791 corp: 32/1248b lim: 100 exec/s: 43 rss: 74Mb L: 74/93 MS: 4 ChangeByte-CopyPart-ChangeBinInt-CrossOver- 00:07:37.393 [2024-07-15 19:02:17.744048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821257943968645120 len:9000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.393 [2024-07-15 19:02:17.744075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.393 #44 NEW cov: 12219 ft: 15804 corp: 33/1279b lim: 100 exec/s: 44 rss: 74Mb L: 31/93 MS: 1 CopyPart- 00:07:37.393 [2024-07-15 19:02:17.794313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.393 [2024-07-15 19:02:17.794340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.393 [2024-07-15 19:02:17.794378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.393 [2024-07-15 19:02:17.794394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.652 #45 NEW cov: 12219 ft: 15820 corp: 34/1329b lim: 100 exec/s: 45 rss: 74Mb L: 50/93 MS: 1 EraseBytes- 00:07:37.652 [2024-07-15 19:02:17.844290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266808781144064 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.652 [2024-07-15 19:02:17.844316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.652 #46 NEW cov: 12219 ft: 15830 corp: 35/1359b lim: 100 exec/s: 46 rss: 74Mb L: 30/93 MS: 1 ChangeBit- 00:07:37.652 [2024-07-15 19:02:17.894414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.652 [2024-07-15 19:02:17.894446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.652 #47 NEW cov: 12219 ft: 15835 corp: 36/1390b lim: 100 exec/s: 47 rss: 74Mb L: 31/93 MS: 1 CrossOver- 
00:07:37.652 [2024-07-15 19:02:17.934722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:432345568522534911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.652 [2024-07-15 19:02:17.934751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.652 [2024-07-15 19:02:17.934807] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.652 [2024-07-15 19:02:17.934822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.652 #48 NEW cov: 12219 ft: 15871 corp: 37/1440b lim: 100 exec/s: 48 rss: 74Mb L: 50/93 MS: 1 ChangeBinInt- 00:07:37.652 [2024-07-15 19:02:17.984731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.652 [2024-07-15 19:02:17.984759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.652 #49 NEW cov: 12219 ft: 15881 corp: 38/1471b lim: 100 exec/s: 49 rss: 74Mb L: 31/93 MS: 1 InsertByte- 00:07:37.652 [2024-07-15 19:02:18.024954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2821266740061667328 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.652 [2024-07-15 19:02:18.024982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.653 [2024-07-15 19:02:18.025035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:36028797675841280 len:10024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.653 [2024-07-15 19:02:18.025051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.653 #50 NEW cov: 12219 ft: 15889 corp: 39/1528b lim: 100 exec/s: 25 rss: 75Mb L: 57/93 MS: 1 ChangeBit- 00:07:37.653 #50 DONE cov: 12219 ft: 15889 corp: 39/1528b lim: 100 exec/s: 25 rss: 75Mb 00:07:37.653 ###### Recommended dictionary. ###### 00:07:37.653 "M\001\000\000" # Uses: 0 00:07:37.653 ###### End of recommended dictionary. 
###### 00:07:37.653 Done 50 runs in 2 second(s) 00:07:37.911 19:02:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:37.911 19:02:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:37.911 19:02:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.911 19:02:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:37.911 00:07:37.911 real 1m5.614s 00:07:37.911 user 1m40.690s 00:07:37.911 sys 0m8.227s 00:07:37.911 19:02:18 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.911 19:02:18 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:37.911 ************************************ 00:07:37.911 END TEST nvmf_llvm_fuzz 00:07:37.911 ************************************ 00:07:37.911 19:02:18 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:07:37.911 19:02:18 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:37.911 19:02:18 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:37.911 19:02:18 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:37.911 19:02:18 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.911 19:02:18 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.912 19:02:18 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:37.912 ************************************ 00:07:37.912 START TEST vfio_llvm_fuzz 00:07:37.912 ************************************ 00:07:37.912 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:38.174 * Looking for test storage... 
00:07:38.174 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:38.174 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:38.175 #define SPDK_CONFIG_H 00:07:38.175 #define SPDK_CONFIG_APPS 1 00:07:38.175 #define SPDK_CONFIG_ARCH native 00:07:38.175 #undef SPDK_CONFIG_ASAN 00:07:38.175 #undef SPDK_CONFIG_AVAHI 00:07:38.175 #undef SPDK_CONFIG_CET 00:07:38.175 #define SPDK_CONFIG_COVERAGE 1 00:07:38.175 #define SPDK_CONFIG_CROSS_PREFIX 00:07:38.175 #undef SPDK_CONFIG_CRYPTO 00:07:38.175 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:38.175 #undef SPDK_CONFIG_CUSTOMOCF 00:07:38.175 #undef SPDK_CONFIG_DAOS 00:07:38.175 #define SPDK_CONFIG_DAOS_DIR 00:07:38.175 #define SPDK_CONFIG_DEBUG 1 00:07:38.175 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:38.175 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:38.175 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:38.175 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:38.175 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:38.175 #undef SPDK_CONFIG_DPDK_UADK 00:07:38.175 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:38.175 #define SPDK_CONFIG_EXAMPLES 1 00:07:38.175 #undef SPDK_CONFIG_FC 00:07:38.175 #define SPDK_CONFIG_FC_PATH 00:07:38.175 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:38.175 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:38.175 #undef SPDK_CONFIG_FUSE 00:07:38.175 #define SPDK_CONFIG_FUZZER 1 00:07:38.175 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:38.175 #undef SPDK_CONFIG_GOLANG 00:07:38.175 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:38.175 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:38.175 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:38.175 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:38.175 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:38.175 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:38.175 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:38.175 #define SPDK_CONFIG_IDXD 1 00:07:38.175 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:38.175 #undef SPDK_CONFIG_IPSEC_MB 00:07:38.175 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:38.175 #define SPDK_CONFIG_ISAL 1 00:07:38.175 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:07:38.175 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:38.175 #define SPDK_CONFIG_LIBDIR 00:07:38.175 #undef SPDK_CONFIG_LTO 00:07:38.175 #define SPDK_CONFIG_MAX_LCORES 128 00:07:38.175 #define SPDK_CONFIG_NVME_CUSE 1 00:07:38.175 #undef SPDK_CONFIG_OCF 00:07:38.175 #define SPDK_CONFIG_OCF_PATH 00:07:38.175 #define SPDK_CONFIG_OPENSSL_PATH 00:07:38.175 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:38.175 #define SPDK_CONFIG_PGO_DIR 00:07:38.175 #undef SPDK_CONFIG_PGO_USE 00:07:38.175 #define SPDK_CONFIG_PREFIX /usr/local 00:07:38.175 #undef SPDK_CONFIG_RAID5F 00:07:38.175 #undef SPDK_CONFIG_RBD 00:07:38.175 #define SPDK_CONFIG_RDMA 1 00:07:38.175 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:38.175 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:38.175 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:38.175 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:38.175 #undef SPDK_CONFIG_SHARED 00:07:38.175 #undef SPDK_CONFIG_SMA 00:07:38.175 #define SPDK_CONFIG_TESTS 1 00:07:38.175 #undef SPDK_CONFIG_TSAN 00:07:38.175 #define SPDK_CONFIG_UBLK 1 00:07:38.175 #define SPDK_CONFIG_UBSAN 1 00:07:38.175 #undef SPDK_CONFIG_UNIT_TESTS 00:07:38.175 #undef SPDK_CONFIG_URING 00:07:38.175 #define SPDK_CONFIG_URING_PATH 00:07:38.175 #undef SPDK_CONFIG_URING_ZNS 00:07:38.175 #undef SPDK_CONFIG_USDT 00:07:38.175 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:38.175 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:38.175 #define SPDK_CONFIG_VFIO_USER 1 00:07:38.175 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:38.175 #define SPDK_CONFIG_VHOST 1 00:07:38.175 #define SPDK_CONFIG_VIRTIO 1 00:07:38.175 #undef SPDK_CONFIG_VTUNE 00:07:38.175 #define SPDK_CONFIG_VTUNE_DIR 00:07:38.175 #define SPDK_CONFIG_WERROR 1 00:07:38.175 #define SPDK_CONFIG_WPDK_DIR 00:07:38.175 #undef SPDK_CONFIG_XNVME 00:07:38.175 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:38.175 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:38.176 
19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:38.176 19:02:18 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:07:38.176 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:07:38.177 19:02:18 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:38.177 19:02:18 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 677995 ]] 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 677995 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.HLouQ1 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.HLouQ1/tests/vfio /tmp/spdk.HLouQ1 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=945618944 00:07:38.177 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4338810880 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=50201980928 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742551040 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=11540570112 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866563072 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871273472 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342710272 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348510208 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5799936 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30870695936 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871277568 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=581632 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.178 19:02:18 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174248960 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174253056 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:38.178 * Looking for test storage... 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=50201980928 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=13755162624 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:38.178 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 
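The storage probe traced above reduces to simple arithmetic over the captured df output. Below is a minimal bash sketch of that check using the exact values this run printed; note that the new_size formula is inferred from those printed values (used + requested matches the 13755162624 in the log), not copied from the autotest_common.sh source.

  # All numbers below are transcribed from the df/trace output above.
  requested_size=2214592512    # the 2 GiB request plus the 64 MiB bump seen in the trace
  avail=50201980928            # spdk_root overlay mounted on /
  used=11540570112
  size=61742551040

  if (( avail >= requested_size )); then
      new_size=$(( used + requested_size ))   # 13755162624, as printed above
      if (( new_size * 100 / size > 95 )); then
          echo "mount would end up more than 95% full; try the next candidate"
      else
          echo "candidate accepted"           # the branch this run takes
      fi
  fi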
00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:38.178 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:38.178 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:38.179 19:02:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:38.436 [2024-07-15 19:02:18.609037] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:38.436 [2024-07-15 19:02:18.609134] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678034 ] 00:07:38.436 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.436 [2024-07-15 19:02:18.697548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.436 [2024-07-15 19:02:18.784367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.694 INFO: Running with entropic power schedule (0xFF, 100). 00:07:38.694 INFO: Seed: 3653415725 00:07:38.694 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:38.694 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:38.694 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:38.694 INFO: A corpus is not provided, starting from an empty corpus 00:07:38.694 #2 INITED exec/s: 0 rss: 65Mb 00:07:38.694 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:38.694 This may also happen if the target rejected all inputs we tried so far 00:07:38.694 [2024-07-15 19:02:19.031979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:39.210 NEW_FUNC[1/657]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:39.210 NEW_FUNC[2/657]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:39.210 #9 NEW cov: 10949 ft: 10835 corp: 2/7b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:39.210 NEW_FUNC[1/1]: 0x1404770 in index_to_sg_t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:677 00:07:39.210 #30 NEW cov: 10977 ft: 14253 corp: 3/13b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:07:39.468 #31 NEW cov: 10977 ft: 15365 corp: 4/19b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:07:39.468 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:39.468 #32 NEW cov: 10994 ft: 15738 corp: 5/25b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:39.725 #33 NEW cov: 10994 ft: 16103 corp: 6/31b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:07:39.725 #35 NEW cov: 10994 ft: 17302 corp: 7/37b lim: 6 exec/s: 35 rss: 74Mb L: 6/6 MS: 2 CMP-InsertByte- DE: "\004\000\000\000"- 00:07:39.983 #38 NEW cov: 10994 ft: 17630 corp: 8/43b lim: 6 exec/s: 38 rss: 74Mb L: 6/6 MS: 3 EraseBytes-ChangeBit-CopyPart- 00:07:40.241 #39 NEW cov: 10994 ft: 17891 corp: 9/49b lim: 6 exec/s: 39 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:07:40.500 #40 NEW cov: 10994 ft: 18167 corp: 10/55b lim: 6 exec/s: 40 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:07:40.758 #45 NEW cov: 11001 ft: 18447 corp: 11/61b lim: 6 exec/s: 45 rss: 74Mb L: 6/6 MS: 5 InsertByte-InsertRepeatedBytes-PersAutoDict-ChangeByte-InsertByte- DE: "\004\000\000\000"- 00:07:41.017 #46 NEW cov: 11001 ft: 18672 corp: 12/67b lim: 6 exec/s: 23 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:07:41.017 #46 DONE cov: 11001 ft: 18672 corp: 12/67b lim: 6 exec/s: 23 rss: 74Mb 00:07:41.017 ###### Recommended dictionary. ###### 00:07:41.017 "\004\000\000\000" # Uses: 1 00:07:41.017 ###### End of recommended dictionary. 
###### 00:07:41.017 Done 46 runs in 2 second(s) 00:07:41.017 [2024-07-15 19:02:21.216413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:41.276 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:41.276 19:02:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:41.276 [2024-07-15 19:02:21.510782] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
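Each fuzzer iteration re-derives its vfio-user configuration from one shared template: the sed invocations above (runs 0 and 1) only swap the placeholder socket directories for per-run ones. A sketch of the same step for an arbitrary run index follows; the template path and the output redirection are assumptions, since bash xtrace does not display redirections.

  # "$i" and "$template" are placeholders for illustration; the substitution
  # itself is copied from the sed expressions in the trace.
  i=1
  template=fuzz_vfio_json.conf
  mkdir -p /tmp/vfio-user-$i/domain/1 /tmp/vfio-user-$i/domain/2
  sed -e "s%/tmp/vfio-user/domain/1%/tmp/vfio-user-$i/domain/1%; s%/tmp/vfio-user/domain/2%/tmp/vfio-user-$i/domain/2%" \
      "$template" > /tmp/vfio-user-$i/fuzz_vfio_json.conf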
00:07:41.276 [2024-07-15 19:02:21.510868] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678428 ] 00:07:41.276 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.276 [2024-07-15 19:02:21.600650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.276 [2024-07-15 19:02:21.686486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.536 INFO: Running with entropic power schedule (0xFF, 100). 00:07:41.536 INFO: Seed: 2273482403 00:07:41.536 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:41.536 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:41.536 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:41.536 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.536 #2 INITED exec/s: 0 rss: 66Mb 00:07:41.536 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:41.536 This may also happen if the target rejected all inputs we tried so far 00:07:41.536 [2024-07-15 19:02:21.944173] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:41.795 [2024-07-15 19:02:22.020148] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:41.795 [2024-07-15 19:02:22.020177] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:41.795 [2024-07-15 19:02:22.020210] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:42.054 NEW_FUNC[1/660]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:42.054 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:42.054 #5 NEW cov: 10956 ft: 10694 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 3 ShuffleBytes-CrossOver-CMP- DE: "\377\000"- 00:07:42.312 [2024-07-15 19:02:22.528484] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:42.312 [2024-07-15 19:02:22.528523] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:42.312 [2024-07-15 19:02:22.528557] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:42.312 #6 NEW cov: 10970 ft: 14234 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:07:42.312 [2024-07-15 19:02:22.727341] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:42.312 [2024-07-15 19:02:22.727364] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:42.312 [2024-07-15 19:02:22.727382] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:42.577 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:42.577 #10 NEW cov: 10990 ft: 15349 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 4 ChangeBit-CopyPart-InsertByte-InsertByte- 00:07:42.577 [2024-07-15 19:02:22.946491] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:42.577 [2024-07-15 19:02:22.946515] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:42.577 [2024-07-15 19:02:22.946532] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:42.836 #15 NEW cov: 10990 ft: 16104 corp: 5/17b lim: 4 exec/s: 15 rss: 74Mb L: 4/4 MS: 5 EraseBytes-PersAutoDict-ChangeBinInt-ChangeByte-CopyPart- DE: "\377\000"- 00:07:42.836 [2024-07-15 19:02:23.148123] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:42.836 [2024-07-15 19:02:23.148146] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:42.836 [2024-07-15 19:02:23.148163] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:42.836 #16 NEW cov: 10990 ft: 16752 corp: 6/21b lim: 4 exec/s: 16 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:43.095 [2024-07-15 19:02:23.342906] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:43.095 [2024-07-15 19:02:23.342927] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:43.095 [2024-07-15 19:02:23.342960] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:43.095 #17 NEW cov: 10990 ft: 17647 corp: 7/25b lim: 4 exec/s: 17 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:07:43.354 [2024-07-15 19:02:23.549332] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:43.354 [2024-07-15 19:02:23.549366] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:43.354 [2024-07-15 19:02:23.549382] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:43.354 #18 NEW cov: 10990 ft: 17789 corp: 8/29b lim: 4 exec/s: 18 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:43.354 [2024-07-15 19:02:23.746972] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:43.354 [2024-07-15 19:02:23.746996] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:43.354 [2024-07-15 19:02:23.747028] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:43.613 #28 NEW cov: 10997 ft: 18051 corp: 9/33b lim: 4 exec/s: 28 rss: 74Mb L: 4/4 MS: 5 EraseBytes-CrossOver-ChangeBinInt-ChangeBit-CopyPart- 00:07:43.613 [2024-07-15 19:02:23.962794] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:43.613 [2024-07-15 19:02:23.962817] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:43.613 [2024-07-15 19:02:23.962850] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:43.872 #34 NEW cov: 10997 ft: 18404 corp: 10/37b lim: 4 exec/s: 17 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:07:43.872 #34 DONE cov: 10997 ft: 18404 corp: 10/37b lim: 4 exec/s: 17 rss: 74Mb 00:07:43.872 ###### Recommended dictionary. ###### 00:07:43.872 "\377\000" # Uses: 1 00:07:43.872 ###### End of recommended dictionary. 
###### 00:07:43.872 Done 34 runs in 2 second(s) 00:07:43.872 [2024-07-15 19:02:24.110421] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:44.132 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:44.132 19:02:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:44.132 [2024-07-15 19:02:24.427225] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
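The echo leak:spdk_nvmf_qpair_disconnect and echo leak:nvmf_ctrlr_create steps in each setup block populate a LeakSanitizer suppression file, and the LSAN_OPTIONS value defined just before them points the fuzzer at it. The file uses standard LSan syntax, one leak:<pattern> per line, matched against the frames of a reported leak's stack. A minimal standalone equivalent (the binary name is a placeholder):

  # Suppress known/accepted leaks in the two named functions.
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_vfio_fuzz
  LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 \
      ./instrumented_binary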
00:07:44.132 [2024-07-15 19:02:24.427313] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678800 ] 00:07:44.132 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.132 [2024-07-15 19:02:24.517523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.391 [2024-07-15 19:02:24.604663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.391 INFO: Running with entropic power schedule (0xFF, 100). 00:07:44.391 INFO: Seed: 901504100 00:07:44.663 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:44.663 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:44.663 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:44.663 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.663 #2 INITED exec/s: 0 rss: 66Mb 00:07:44.663 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:44.663 This may also happen if the target rejected all inputs we tried so far 00:07:44.663 [2024-07-15 19:02:24.868141] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:44.663 [2024-07-15 19:02:24.941937] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:44.929 NEW_FUNC[1/658]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:44.929 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:44.929 #42 NEW cov: 10939 ft: 10699 corp: 2/9b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 5 ChangeBit-CrossOver-InsertRepeatedBytes-ChangeBit-CopyPart- 00:07:45.187 [2024-07-15 19:02:25.452816] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:45.187 NEW_FUNC[1/1]: 0x170d570 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1157 00:07:45.187 #48 NEW cov: 10956 ft: 14167 corp: 3/17b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:45.446 [2024-07-15 19:02:25.659841] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:45.446 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:45.446 #49 NEW cov: 10973 ft: 15920 corp: 4/25b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:07:45.446 [2024-07-15 19:02:25.865312] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:45.704 #65 NEW cov: 10973 ft: 16320 corp: 5/33b lim: 8 exec/s: 65 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:07:45.704 [2024-07-15 19:02:26.059623] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:45.962 #66 NEW cov: 10973 ft: 17123 corp: 6/41b lim: 8 exec/s: 66 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:07:45.962 [2024-07-15 19:02:26.251779] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:45.962 #71 NEW cov: 10973 ft: 17362 corp: 7/49b lim: 8 exec/s: 71 rss: 74Mb L: 8/8 MS: 5 CrossOver-ChangeByte-CrossOver-ChangeBit-InsertByte- 00:07:46.221 
[2024-07-15 19:02:26.441352] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:46.221 #72 NEW cov: 10973 ft: 17807 corp: 8/57b lim: 8 exec/s: 72 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:07:46.221 [2024-07-15 19:02:26.644743] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:46.495 #73 NEW cov: 10980 ft: 17906 corp: 9/65b lim: 8 exec/s: 73 rss: 74Mb L: 8/8 MS: 1 CMP- DE: "\317 \000\000"- 00:07:46.495 [2024-07-15 19:02:26.847753] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:46.800 #74 NEW cov: 10980 ft: 18090 corp: 10/73b lim: 8 exec/s: 37 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:46.800 #74 DONE cov: 10980 ft: 18090 corp: 10/73b lim: 8 exec/s: 37 rss: 74Mb 00:07:46.800 ###### Recommended dictionary. ###### 00:07:46.800 "\317 \000\000" # Uses: 0 00:07:46.800 ###### End of recommended dictionary. ###### 00:07:46.800 Done 74 runs in 2 second(s) 00:07:46.800 [2024-07-15 19:02:26.993418] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:47.058 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:47.058 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:47.059 19:02:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:47.059 [2024-07-15 19:02:27.310389] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:47.059 [2024-07-15 19:02:27.310473] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid679179 ] 00:07:47.059 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.059 [2024-07-15 19:02:27.399249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.059 [2024-07-15 19:02:27.487646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.317 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.317 INFO: Seed: 3775473455 00:07:47.317 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:47.317 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:47.317 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:47.317 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.317 #2 INITED exec/s: 0 rss: 66Mb 00:07:47.317 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:47.317 This may also happen if the target rejected all inputs we tried so far 00:07:47.587 [2024-07-15 19:02:27.755672] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:47.854 NEW_FUNC[1/659]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:47.854 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:47.854 #242 NEW cov: 10950 ft: 10583 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 5 InsertRepeatedBytes-ChangeBinInt-InsertByte-CopyPart-InsertByte- 00:07:48.112 #243 NEW cov: 10964 ft: 13608 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CrossOver- 00:07:48.370 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:48.370 #244 NEW cov: 10981 ft: 14580 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:07:48.629 #245 NEW cov: 10981 ft: 15010 corp: 5/129b lim: 32 exec/s: 245 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:48.629 #251 NEW cov: 10981 ft: 15423 corp: 6/161b lim: 32 exec/s: 251 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:48.888 #252 NEW cov: 10981 ft: 16198 corp: 7/193b lim: 32 exec/s: 252 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:49.147 #253 NEW cov: 10981 ft: 17004 corp: 8/225b lim: 32 exec/s: 253 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:07:49.147 #254 NEW cov: 10988 ft: 17327 corp: 9/257b lim: 32 exec/s: 254 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:07:49.406 #255 NEW cov: 10988 ft: 18635 corp: 10/289b lim: 32 exec/s: 127 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:49.406 #255 DONE cov: 10988 ft: 18635 corp: 10/289b lim: 32 exec/s: 127 rss: 74Mb 00:07:49.406 Done 255 runs in 
2 second(s) 00:07:49.406 [2024-07-15 19:02:29.803416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:49.664 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:49.664 19:02:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:49.922 [2024-07-15 19:02:30.118691] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
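A note for reading the interleaved fuzzer output: on each NEW line, cov counts covered code edges, ft counts distinct coverage features, corp gives the corpus size in inputs and total bytes, lim is the current input-length cap, exec/s and rss are throughput and memory, L is the length of the input that produced the line, and MS lists the mutation sequence used (with DE naming any dictionary entries it drew on). To chart coverage growth across one of these runs, a one-liner like the following works on a saved copy of the log; run.log is a placeholder name.

  # Print the cov value from every NEW line, in order of appearance.
  grep -o 'NEW cov: [0-9]*' run.log | awk '{ print $3 }'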
00:07:49.922 [2024-07-15 19:02:30.118782] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid679553 ] 00:07:49.922 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.922 [2024-07-15 19:02:30.206458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.922 [2024-07-15 19:02:30.287346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.182 INFO: Running with entropic power schedule (0xFF, 100). 00:07:50.182 INFO: Seed: 2285517065 00:07:50.182 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:50.182 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:50.182 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:50.182 INFO: A corpus is not provided, starting from an empty corpus 00:07:50.182 #2 INITED exec/s: 0 rss: 66Mb 00:07:50.182 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:50.182 This may also happen if the target rejected all inputs we tried so far 00:07:50.182 [2024-07-15 19:02:30.543718] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:50.713 NEW_FUNC[1/659]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:50.713 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:50.713 #22 NEW cov: 10949 ft: 10710 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 5 CrossOver-InsertByte-EraseBytes-ChangeBit-InsertRepeatedBytes- 00:07:50.982 #33 NEW cov: 10964 ft: 13786 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CMP- DE: "W\013\000\000"- 00:07:51.245 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:51.245 #34 NEW cov: 10981 ft: 15343 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:51.245 #35 NEW cov: 10981 ft: 17039 corp: 5/129b lim: 32 exec/s: 35 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:07:51.503 #36 NEW cov: 10981 ft: 17402 corp: 6/161b lim: 32 exec/s: 36 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:51.761 #37 NEW cov: 10981 ft: 17700 corp: 7/193b lim: 32 exec/s: 37 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:52.019 #38 NEW cov: 10981 ft: 17828 corp: 8/225b lim: 32 exec/s: 38 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:52.278 #39 NEW cov: 10988 ft: 18111 corp: 9/257b lim: 32 exec/s: 39 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:52.278 #45 NEW cov: 10988 ft: 18362 corp: 10/289b lim: 32 exec/s: 22 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:52.278 #45 DONE cov: 10988 ft: 18362 corp: 10/289b lim: 32 exec/s: 22 rss: 74Mb 00:07:52.278 ###### Recommended dictionary. ###### 00:07:52.278 "W\013\000\000" # Uses: 0 00:07:52.278 ###### End of recommended dictionary. 
###### 00:07:52.278 Done 45 runs in 2 second(s) 00:07:52.278 [2024-07-15 19:02:32.694407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:52.537 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:52.537 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:52.797 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:52.797 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:52.797 19:02:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:52.797 [2024-07-15 19:02:32.996095] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
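The recommended-dictionary blocks that close each run print their entries with octal escapes. The same tokens can be written into an AFL-style dictionary file (hex escapes, one quoted token per line) and replayed on a later run; it is an assumption here that the SPDK harness forwards libFuzzer's standard -dict= flag, since none of the traced invocations use it.

  # vfio.dict is a placeholder name; the values transcribe entries printed above
  # ("\004\000\000\000", "W\013\000\000", "\317 \000\000") into hex form.
  printf '%s\n' '"\x04\x00\x00\x00"' '"W\x0b\x00\x00"' '"\xcf \x00\x00"' > vfio.dict
  # then, e.g.: ./llvm_vfio_fuzz ... -dict=vfio.dict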
00:07:52.797 [2024-07-15 19:02:32.996173] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid679926 ] 00:07:52.797 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.797 [2024-07-15 19:02:33.081366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.797 [2024-07-15 19:02:33.165350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.055 INFO: Running with entropic power schedule (0xFF, 100). 00:07:53.055 INFO: Seed: 872556069 00:07:53.055 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:53.055 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:53.055 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:53.055 INFO: A corpus is not provided, starting from an empty corpus 00:07:53.055 #2 INITED exec/s: 0 rss: 66Mb 00:07:53.055 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:53.055 This may also happen if the target rejected all inputs we tried so far 00:07:53.055 [2024-07-15 19:02:33.426904] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:53.314 [2024-07-15 19:02:33.506082] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:53.314 [2024-07-15 19:02:33.506121] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:53.573 NEW_FUNC[1/660]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:53.573 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:53.573 #87 NEW cov: 10961 ft: 10883 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 5 ChangeBit-CrossOver-InsertRepeatedBytes-ChangeBit-InsertRepeatedBytes- 00:07:53.830 [2024-07-15 19:02:34.012473] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:53.830 [2024-07-15 19:02:34.012521] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:53.830 #93 NEW cov: 10975 ft: 13733 corp: 3/27b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:07:53.830 [2024-07-15 19:02:34.201113] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:53.830 [2024-07-15 19:02:34.201148] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:54.089 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:54.089 #94 NEW cov: 10992 ft: 15203 corp: 4/40b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:07:54.089 [2024-07-15 19:02:34.395268] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:54.089 [2024-07-15 19:02:34.395299] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:54.089 #95 NEW cov: 10992 ft: 15956 corp: 5/53b lim: 13 exec/s: 95 rss: 74Mb L: 13/13 MS: 1 CrossOver- 00:07:54.348 [2024-07-15 19:02:34.577956] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:54.348 [2024-07-15 
19:02:34.577987] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:54.348 #101 NEW cov: 10992 ft: 16506 corp: 6/66b lim: 13 exec/s: 101 rss: 74Mb L: 13/13 MS: 1 ChangeByte- 00:07:54.348 [2024-07-15 19:02:34.762044] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:54.348 [2024-07-15 19:02:34.762075] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:54.607 #102 NEW cov: 10992 ft: 16844 corp: 7/79b lim: 13 exec/s: 102 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:54.607 [2024-07-15 19:02:34.951995] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:54.607 [2024-07-15 19:02:34.952027] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:54.867 #103 NEW cov: 10992 ft: 17186 corp: 8/92b lim: 13 exec/s: 103 rss: 74Mb L: 13/13 MS: 1 CrossOver- 00:07:54.867 [2024-07-15 19:02:35.136728] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:54.867 [2024-07-15 19:02:35.136759] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:54.867 #109 NEW cov: 10999 ft: 17372 corp: 9/105b lim: 13 exec/s: 109 rss: 74Mb L: 13/13 MS: 1 ChangeBit- 00:07:55.125 [2024-07-15 19:02:35.333161] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:55.125 [2024-07-15 19:02:35.333192] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:55.125 #115 NEW cov: 10999 ft: 17754 corp: 10/118b lim: 13 exec/s: 57 rss: 74Mb L: 13/13 MS: 1 ChangeBit- 00:07:55.125 #115 DONE cov: 10999 ft: 17754 corp: 10/118b lim: 13 exec/s: 57 rss: 74Mb 00:07:55.125 Done 115 runs in 2 second(s) 00:07:55.125 [2024-07-15 19:02:35.458428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- 
# mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:55.384 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:55.384 19:02:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:55.384 [2024-07-15 19:02:35.775852] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:55.385 [2024-07-15 19:02:35.775944] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid680302 ] 00:07:55.385 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.643 [2024-07-15 19:02:35.863320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.643 [2024-07-15 19:02:35.948092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.902 INFO: Running with entropic power schedule (0xFF, 100). 00:07:55.902 INFO: Seed: 3651547057 00:07:55.902 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:55.902 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:55.902 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:55.902 INFO: A corpus is not provided, starting from an empty corpus 00:07:55.902 #2 INITED exec/s: 0 rss: 66Mb 00:07:55.902 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:55.902 This may also happen if the target rejected all inputs we tried so far 00:07:55.902 [2024-07-15 19:02:36.204713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:55.902 [2024-07-15 19:02:36.281996] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:55.902 [2024-07-15 19:02:36.282033] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:56.419 NEW_FUNC[1/660]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:56.419 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:56.419 #3 NEW cov: 10953 ft: 10459 corp: 2/10b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:56.419 [2024-07-15 19:02:36.788302] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:56.419 [2024-07-15 19:02:36.788351] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:56.677 #9 NEW cov: 10967 ft: 13780 corp: 3/19b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CopyPart- 00:07:56.677 [2024-07-15 19:02:36.995310] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:56.677 [2024-07-15 19:02:36.995344] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:56.936 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:56.936 #10 NEW cov: 10984 ft: 14829 corp: 4/28b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:56.936 [2024-07-15 19:02:37.194788] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:56.936 [2024-07-15 19:02:37.194819] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:56.936 #11 NEW cov: 10984 ft: 14993 corp: 5/37b lim: 9 exec/s: 11 rss: 74Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:57.196 [2024-07-15 19:02:37.407325] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:57.196 [2024-07-15 19:02:37.407356] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:57.196 #12 NEW cov: 10984 ft: 15278 corp: 6/46b lim: 9 exec/s: 12 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:57.196 [2024-07-15 19:02:37.606342] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:57.196 [2024-07-15 19:02:37.606377] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:57.457 #13 NEW cov: 10984 ft: 16104 corp: 7/55b lim: 9 exec/s: 13 rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:07:57.457 [2024-07-15 19:02:37.809353] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:57.457 [2024-07-15 19:02:37.809384] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:57.715 #14 NEW cov: 10984 ft: 16298 corp: 8/64b lim: 9 exec/s: 14 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:57.715 [2024-07-15 19:02:38.014256] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:57.715 [2024-07-15 19:02:38.014286] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:57.715 #15 NEW cov: 10991 ft: 16461 corp: 9/73b lim: 9 exec/s: 15 
rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:07:57.973 [2024-07-15 19:02:38.215909] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:57.973 [2024-07-15 19:02:38.215939] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:57.973 #16 pulse cov: 10991 ft: 16701 corp: 9/73b lim: 9 exec/s: 8 rss: 74Mb 00:07:57.973 #16 NEW cov: 10991 ft: 16701 corp: 10/82b lim: 9 exec/s: 8 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:57.973 #16 DONE cov: 10991 ft: 16701 corp: 10/82b lim: 9 exec/s: 8 rss: 74Mb 00:07:57.973 Done 16 runs in 2 second(s) 00:07:57.973 [2024-07-15 19:02:38.349419] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:07:58.233 19:02:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:07:58.233 19:02:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:58.233 19:02:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:58.233 19:02:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:07:58.233 00:07:58.233 real 0m20.346s 00:07:58.233 user 0m28.303s 00:07:58.233 sys 0m2.112s 00:07:58.233 19:02:38 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.233 19:02:38 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:58.233 ************************************ 00:07:58.233 END TEST vfio_llvm_fuzz 00:07:58.233 ************************************ 00:07:58.492 19:02:38 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:07:58.492 19:02:38 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:07:58.492 00:07:58.492 real 1m26.254s 00:07:58.492 user 2m9.104s 00:07:58.492 sys 0m10.546s 00:07:58.492 19:02:38 llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.492 19:02:38 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:58.492 ************************************ 00:07:58.492 END TEST llvm_fuzz 00:07:58.492 ************************************ 00:07:58.492 19:02:38 -- common/autotest_common.sh@1142 -- # return 0 00:07:58.492 19:02:38 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:07:58.492 19:02:38 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:07:58.492 19:02:38 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:07:58.492 19:02:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:58.492 19:02:38 -- common/autotest_common.sh@10 -- # set +x 00:07:58.492 19:02:38 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:07:58.492 19:02:38 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:07:58.492 19:02:38 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:07:58.492 19:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:03.765 INFO: APP EXITING 00:08:03.765 INFO: killing all VMs 00:08:03.765 INFO: killing vhost app 00:08:03.765 WARN: no vhost pid file found 00:08:03.765 INFO: EXIT DONE 00:08:07.053 Waiting for block devices as requested 00:08:07.053 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:08:07.053 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:08:07.053 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:07.053 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:07.053 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:07.053 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:07.312 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:07.312 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:07.312 0000:00:04.1 (8086 2021): vfio-pci 
-> ioatdma 00:08:07.570 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:07.570 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:08:07.829 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:07.829 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:07.829 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:08.088 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:08.088 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:08.088 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:08.347 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:08.347 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:12.546 Cleaning 00:08:12.546 Removing: /dev/shm/spdk_tgt_trace.pid651780 00:08:12.546 Removing: /var/run/dpdk/spdk_pid651142 00:08:12.546 Removing: /var/run/dpdk/spdk_pid651780 00:08:12.546 Removing: /var/run/dpdk/spdk_pid652319 00:08:12.546 Removing: /var/run/dpdk/spdk_pid653057 00:08:12.546 Removing: /var/run/dpdk/spdk_pid653275 00:08:12.546 Removing: /var/run/dpdk/spdk_pid654061 00:08:12.546 Removing: /var/run/dpdk/spdk_pid654084 00:08:12.546 Removing: /var/run/dpdk/spdk_pid654406 00:08:12.546 Removing: /var/run/dpdk/spdk_pid654666 00:08:12.546 Removing: /var/run/dpdk/spdk_pid655016 00:08:12.546 Removing: /var/run/dpdk/spdk_pid655301 00:08:12.546 Removing: /var/run/dpdk/spdk_pid655545 00:08:12.546 Removing: /var/run/dpdk/spdk_pid655754 00:08:12.547 Removing: /var/run/dpdk/spdk_pid655952 00:08:12.547 Removing: /var/run/dpdk/spdk_pid656187 00:08:12.547 Removing: /var/run/dpdk/spdk_pid656969 00:08:12.547 Removing: /var/run/dpdk/spdk_pid659907 00:08:12.547 Removing: /var/run/dpdk/spdk_pid660134 00:08:12.547 Removing: /var/run/dpdk/spdk_pid660514 00:08:12.547 Removing: /var/run/dpdk/spdk_pid660528 00:08:12.547 Removing: /var/run/dpdk/spdk_pid661101 00:08:12.547 Removing: /var/run/dpdk/spdk_pid661120 00:08:12.547 Removing: /var/run/dpdk/spdk_pid661519 00:08:12.547 Removing: /var/run/dpdk/spdk_pid661701 00:08:12.547 Removing: /var/run/dpdk/spdk_pid661917 00:08:12.547 Removing: /var/run/dpdk/spdk_pid662101 00:08:12.547 Removing: /var/run/dpdk/spdk_pid662309 00:08:12.547 Removing: /var/run/dpdk/spdk_pid662332 00:08:12.547 Removing: /var/run/dpdk/spdk_pid662785 00:08:12.547 Removing: /var/run/dpdk/spdk_pid662986 00:08:12.547 Removing: /var/run/dpdk/spdk_pid663193 00:08:12.547 Removing: /var/run/dpdk/spdk_pid663433 00:08:12.547 Removing: /var/run/dpdk/spdk_pid663649 00:08:12.547 Removing: /var/run/dpdk/spdk_pid663670 00:08:12.547 Removing: /var/run/dpdk/spdk_pid663852 00:08:12.547 Removing: /var/run/dpdk/spdk_pid664101 00:08:12.547 Removing: /var/run/dpdk/spdk_pid664313 00:08:12.547 Removing: /var/run/dpdk/spdk_pid664515 00:08:12.547 Removing: /var/run/dpdk/spdk_pid664719 00:08:12.547 Removing: /var/run/dpdk/spdk_pid664920 00:08:12.547 Removing: /var/run/dpdk/spdk_pid665123 00:08:12.547 Removing: /var/run/dpdk/spdk_pid665328 00:08:12.547 Removing: /var/run/dpdk/spdk_pid665533 00:08:12.547 Removing: /var/run/dpdk/spdk_pid665732 00:08:12.547 Removing: /var/run/dpdk/spdk_pid665938 00:08:12.547 Removing: /var/run/dpdk/spdk_pid666137 00:08:12.547 Removing: /var/run/dpdk/spdk_pid666339 00:08:12.547 Removing: /var/run/dpdk/spdk_pid666547 00:08:12.547 Removing: /var/run/dpdk/spdk_pid666759 00:08:12.547 Removing: /var/run/dpdk/spdk_pid667007 00:08:12.547 Removing: /var/run/dpdk/spdk_pid667254 00:08:12.547 Removing: /var/run/dpdk/spdk_pid667530 00:08:12.547 Removing: /var/run/dpdk/spdk_pid667734 00:08:12.547 Removing: /var/run/dpdk/spdk_pid667939 00:08:12.547 Removing: /var/run/dpdk/spdk_pid668137 00:08:12.547 
Removing: /var/run/dpdk/spdk_pid668214 00:08:12.547 Removing: /var/run/dpdk/spdk_pid668632 00:08:12.547 Removing: /var/run/dpdk/spdk_pid669091 00:08:12.547 Removing: /var/run/dpdk/spdk_pid669434 00:08:12.547 Removing: /var/run/dpdk/spdk_pid669774 00:08:12.547 Removing: /var/run/dpdk/spdk_pid670145 00:08:12.547 Removing: /var/run/dpdk/spdk_pid670513 00:08:12.547 Removing: /var/run/dpdk/spdk_pid670882 00:08:12.547 Removing: /var/run/dpdk/spdk_pid671250 00:08:12.547 Removing: /var/run/dpdk/spdk_pid671625 00:08:12.547 Removing: /var/run/dpdk/spdk_pid671962 00:08:12.547 Removing: /var/run/dpdk/spdk_pid672267 00:08:12.547 Removing: /var/run/dpdk/spdk_pid672576 00:08:12.547 Removing: /var/run/dpdk/spdk_pid672945 00:08:12.547 Removing: /var/run/dpdk/spdk_pid673314 00:08:12.547 Removing: /var/run/dpdk/spdk_pid673682 00:08:12.547 Removing: /var/run/dpdk/spdk_pid674053 00:08:12.547 Removing: /var/run/dpdk/spdk_pid674418 00:08:12.547 Removing: /var/run/dpdk/spdk_pid674774 00:08:12.547 Removing: /var/run/dpdk/spdk_pid675065 00:08:12.547 Removing: /var/run/dpdk/spdk_pid675406 00:08:12.547 Removing: /var/run/dpdk/spdk_pid675742 00:08:12.547 Removing: /var/run/dpdk/spdk_pid676114 00:08:12.547 Removing: /var/run/dpdk/spdk_pid676485 00:08:12.547 Removing: /var/run/dpdk/spdk_pid676850 00:08:12.547 Removing: /var/run/dpdk/spdk_pid677226 00:08:12.547 Removing: /var/run/dpdk/spdk_pid677591 00:08:12.547 Removing: /var/run/dpdk/spdk_pid678034 00:08:12.547 Removing: /var/run/dpdk/spdk_pid678428 00:08:12.547 Removing: /var/run/dpdk/spdk_pid678800 00:08:12.547 Removing: /var/run/dpdk/spdk_pid679179 00:08:12.547 Removing: /var/run/dpdk/spdk_pid679553 00:08:12.547 Removing: /var/run/dpdk/spdk_pid679926 00:08:12.547 Removing: /var/run/dpdk/spdk_pid680302 00:08:12.547 Clean 00:08:12.547 19:02:52 -- common/autotest_common.sh@1451 -- # return 0 00:08:12.547 19:02:52 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:08:12.547 19:02:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.547 19:02:52 -- common/autotest_common.sh@10 -- # set +x 00:08:12.547 19:02:52 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:08:12.547 19:02:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.547 19:02:52 -- common/autotest_common.sh@10 -- # set +x 00:08:12.547 19:02:52 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:12.547 19:02:52 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:12.547 19:02:52 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:12.547 19:02:52 -- spdk/autotest.sh@391 -- # hash lcov 00:08:12.547 19:02:52 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:08:12.805 19:02:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:12.805 19:02:52 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:12.805 19:02:52 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.805 19:02:52 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.805 19:02:52 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.805 19:02:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.805 19:02:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.805 19:02:52 -- paths/export.sh@5 -- $ export PATH 00:08:12.805 19:02:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.806 19:02:52 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:08:12.806 19:02:52 -- common/autobuild_common.sh@444 -- $ date +%s 00:08:12.806 19:02:52 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721062972.XXXXXX 00:08:12.806 19:02:53 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721062972.lHuE7O 00:08:12.806 19:02:53 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:08:12.806 19:02:53 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:08:12.806 19:02:53 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:08:12.806 19:02:53 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:08:12.806 19:02:53 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:08:12.806 19:02:53 -- common/autobuild_common.sh@460 -- $ get_config_params 00:08:12.806 19:02:53 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:08:12.806 19:02:53 -- common/autotest_common.sh@10 -- $ set +x 00:08:12.806 19:02:53 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:08:12.806 19:02:53 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:08:12.806 19:02:53 -- pm/common@17 -- $ local monitor 00:08:12.806 19:02:53 -- pm/common@19 -- $ 
for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.806 19:02:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.806 19:02:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.806 19:02:53 -- pm/common@21 -- $ date +%s 00:08:12.806 19:02:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.806 19:02:53 -- pm/common@21 -- $ date +%s 00:08:12.806 19:02:53 -- pm/common@25 -- $ sleep 1 00:08:12.806 19:02:53 -- pm/common@21 -- $ date +%s 00:08:12.806 19:02:53 -- pm/common@21 -- $ date +%s 00:08:12.806 19:02:53 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721062973 00:08:12.806 19:02:53 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721062973 00:08:12.806 19:02:53 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721062973 00:08:12.806 19:02:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721062973 00:08:12.806 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721062973_collect-vmstat.pm.log 00:08:12.806 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721062973_collect-cpu-load.pm.log 00:08:12.806 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721062973_collect-cpu-temp.pm.log 00:08:12.806 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721062973_collect-bmc-pm.bmc.pm.log 00:08:13.741 19:02:54 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:08:13.741 19:02:54 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72 00:08:13.741 19:02:54 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:13.741 19:02:54 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:08:13.741 19:02:54 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:08:13.741 19:02:54 -- spdk/autopackage.sh@19 -- $ timing_finish 00:08:13.741 19:02:54 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:13.741 19:02:54 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:08:13.741 19:02:54 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:13.741 19:02:54 -- spdk/autopackage.sh@20 -- $ exit 0 00:08:13.741 19:02:54 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:08:13.741 19:02:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:13.741 19:02:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:13.741 19:02:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:13.741 19:02:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 
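The resource-monitor tracing here follows a simple pid-file protocol: start_monitor_resources (above) launches collect-cpu-load, collect-vmstat, collect-cpu-temp and collect-bmc-pm with a shared timestamped log prefix, each collector records its PID under output/power, and stop_monitor_resources (the trace resumes below) reads each pid file back and sends SIGTERM. A simplified sketch of that stop loop, with $output_power standing in for the .../output/power path in the trace:

  # Simplified sketch of the stop_monitor_resources/signal_monitor_resources loop.
  # $output_power abbreviates .../spdk/../output/power from the trace.
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)
  for monitor in "${MONITOR_RESOURCES[@]}"; do
      pidfile=$output_power/$monitor.pid
      [[ -e $pidfile ]] || continue        # collector may have exited on its own
      pid=$(<"$pidfile")
      if [[ $monitor == collect-bmc-pm ]]; then
          sudo -E kill -TERM "$pid"        # the BMC collector was started via sudo
      else
          kill -TERM "$pid"                # SIGTERM lets it flush its .pm.log
      fi
  done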
00:08:13.741 19:02:54 -- pm/common@44 -- $ pid=686213 00:08:13.741 19:02:54 -- pm/common@50 -- $ kill -TERM 686213 00:08:13.741 19:02:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:13.741 19:02:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:13.741 19:02:54 -- pm/common@44 -- $ pid=686216 00:08:13.741 19:02:54 -- pm/common@50 -- $ kill -TERM 686216 00:08:13.741 19:02:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:13.742 19:02:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:13.742 19:02:54 -- pm/common@44 -- $ pid=686219 00:08:13.742 19:02:54 -- pm/common@50 -- $ kill -TERM 686219 00:08:13.742 19:02:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:13.742 19:02:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:13.742 19:02:54 -- pm/common@44 -- $ pid=686269 00:08:13.742 19:02:54 -- pm/common@50 -- $ sudo -E kill -TERM 686269 00:08:13.742 + [[ -n 550728 ]] 00:08:13.742 + sudo kill 550728 00:08:13.751 [Pipeline] } 00:08:13.769 [Pipeline] // stage 00:08:13.774 [Pipeline] } 00:08:13.790 [Pipeline] // timeout 00:08:13.795 [Pipeline] } 00:08:13.813 [Pipeline] // catchError 00:08:13.819 [Pipeline] } 00:08:13.834 [Pipeline] // wrap 00:08:13.839 [Pipeline] } 00:08:13.855 [Pipeline] // catchError 00:08:13.864 [Pipeline] stage 00:08:13.867 [Pipeline] { (Epilogue) 00:08:13.880 [Pipeline] catchError 00:08:13.882 [Pipeline] { 00:08:13.896 [Pipeline] echo 00:08:13.897 Cleanup processes 00:08:13.903 [Pipeline] sh 00:08:14.186 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:14.186 610052 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721062627 00:08:14.186 610078 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721062627 00:08:14.186 686448 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:08:14.186 687088 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:14.200 [Pipeline] sh 00:08:14.484 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:14.484 ++ grep -v 'sudo pgrep' 00:08:14.484 ++ awk '{print $1}' 00:08:14.484 + sudo kill -9 686448 00:08:14.495 [Pipeline] sh 00:08:14.782 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:08:15.743 [Pipeline] sh 00:08:16.026 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:08:16.026 Artifacts sizes are good 00:08:16.039 [Pipeline] archiveArtifacts 00:08:16.045 Archiving artifacts 00:08:16.101 [Pipeline] sh 00:08:16.402 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:08:16.415 [Pipeline] cleanWs 00:08:16.424 [WS-CLEANUP] Deleting project workspace... 00:08:16.424 [WS-CLEANUP] Deferred wipeout is used... 
00:08:16.446 [WS-CLEANUP] done 00:08:16.447 [Pipeline] } 00:08:16.462 [Pipeline] // catchError 00:08:16.473 [Pipeline] sh 00:08:16.811 + logger -p user.info -t JENKINS-CI 00:08:16.837 [Pipeline] } 00:08:16.850 [Pipeline] // stage 00:08:16.853 [Pipeline] } 00:08:16.864 [Pipeline] // node 00:08:16.867 [Pipeline] End of Pipeline 00:08:16.885 Finished: SUCCESS
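For reference, the epilogue's "Cleanup processes" step above reduces to a pgrep/grep/awk/kill chain: list every process still referencing the workspace, drop the pgrep entry itself, and SIGKILL the rest (here it reaped the leftover ipmitool sdr dump, PID 686448). A minimal sketch of that chain; the trailing || true is an assumption added so the step stays green when nothing matches:

  # Minimal sketch of the stray-process cleanup used in the epilogue.
  ws=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
  [[ -n $pids ]] && sudo kill -9 $pids || true   # ignore failures if already gone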